House 337 has launched an AI ethics committee and AI guidelines to regulate the use of this technology as it continues to grow.
The global creative shop is calling on other agencies and companies to be aware of their own use of AI and consider their approach to AI ethics.
The committee and guidelines aim to educate people on AI challenges and opportunities as well as protect human creativity.
The guidelines cover human oversight of AI tools to guard against bias; the ethical use of audio, video and imagery without infringing copyright; ensuring AI supports human creativity rather than replacing it; informing clients and customers when AI systems collect and use their data; training for everyone working with AI; and a “red flag” process for raising concerns anonymously.
The committee is made up of House 337 leadership team members and AI subject-matter experts from across the agency, and is advised by Next 15 AI, data and legal specialists. It will oversee the agency's use of AI and monitor the types of brands House 337 works with to ensure they are making a positive impact.
Work on the ethical guidance was led by House 337's Kim Lawrie, head of creative and emerging technology, and Matt Rhodes, chief strategy officer.
Lawrie said: “There’s plenty of hand-wringing in this field but very little practical advice. It’s more than time that we lead the charge to support clients and the wider business community with best practice examples.
“AI is nothing to be afraid of, and we are committed to open education around AI tools so that everyone can be as excited as we are about these technologies and how they can change business for the better.”
The shop spent six months working on its guidelines and asked for feedback from every member of staff throughout the process.
Phil Fearnley, group chief executive of House 337, added: “It’s one thing to talk about the need for ethical frameworks and another to put these systems in place. It’s not hard to know the difference between good and bad practices when it comes to working with AI, and there are many experts that agencies can turn to for advice.
“It takes much longer to wait for regulation and leaders to come and slap you on the wrist, but that’s an expensive bet. It’s much easier to get your moral philosophy together as a business, talk to the people you work with and create a system that works cleanly for everyone.
“When we know what we are doing and what the boundaries are, everyone can work freely and safely with AI, allowing us to make much more creative and exciting work.”