As organizations explore ways to harness artificial intelligence, including the large language models that power generative AI, it’s essential to be prepared for both “misfires” and security risks. AI tools’ capacity for bias and for returning false or misleading information (a.k.a. “hallucinating”) necessitates careful training and prompting. Further, enterprise AI use cases often rely on interfacing with essential systems and accessing sensitive data, so robust security controls are critical.
It’s important to remain mindful that no model is flawless and that AI’s many benefits can come with serious security vulnerabilities. Below, 20 members of Forbes Technology Council share tips to help organizations account for the abilities, limitations and security challenges of AI.
Understand The Information Customer-Facing Chatbots May Expose
Guard your brand’s reputation. By their very nature, LLMs in their current architecture will always hallucinate. You can try to limit it by fine-tuning and retraining the model, or by adding and adjusting guardrails, but there is only so much you can do. Businesses should take this into account, especially when exposing solutions to external customers. Given the right prompts, chatbots will lie, promote competitor solutions, misbehave and so on. – Pawel Rzeszucinski, Team Internet Group PLC
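As one illustration of the kind of guardrail Rzeszucinski describes, the minimal sketch below post-filters a chatbot’s reply before it reaches a customer. Everything here is hypothetical: `BLOCKED_PATTERNS`, `FALLBACK_MESSAGE` and `apply_guardrail` are illustrative names rather than any real product’s API, and a pattern filter like this is only a last line of defense, not a fix for hallucination.

```python
import re

# Hypothetical output guardrail for a customer-facing chatbot.
# The patterns and fallback text are illustrative placeholders.
BLOCKED_PATTERNS = [
    re.compile(r"\b(competitorx|competitory)\b", re.IGNORECASE),  # competitor names
    re.compile(r"\b(guarantee|promise)\b", re.IGNORECASE),        # risky commitments
]

FALLBACK_MESSAGE = (
    "I'm not able to help with that here. "
    "Please contact our support team for details."
)

def apply_guardrail(model_output: str) -> str:
    """Return the model's reply unchanged, or a safe fallback if the
    reply trips any blocked pattern. A filter like this cannot stop an
    LLM from hallucinating; it only limits what reaches the customer."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return FALLBACK_MESSAGE
    return model_output

if __name__ == "__main__":
    # Tripped by a competitor mention -> replaced with the fallback.
    print(apply_guardrail("You should really try CompetitorX instead!"))
    # Clean reply -> passed through unchanged.
    print(apply_guardrail("Our plan includes 24/7 email support."))
```

In practice, teams often layer several such checks (pattern filters, classifier-based moderation, human review for high-stakes flows), since no single guardrail catches every misbehavior.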