Are you integrating Generative AI into your IT landscape?

Here is what to consider!

The renewed public interest in AI, and Generative AI in particular, at the end of 2022 was especially exciting for us at sequire technology. On the one hand, we have always had an affinity for AI; on the other, we were the first to highlight glaring security issues that arise when Large Language Models (LLMs) are used in applications.

We embrace AI despite warning of risks

This topic has brought us a tremendous amount of attention:

– Numerous press reports.

– A warning from the German Federal Office for Information Security (BSI) explicitly based on our work.

– An invitation to the world’s largest cybersecurity conference, Black Hat in Las Vegas, with over 20,000 attendees.

– First place in the OWASP Top 10 for Large Language Model Applications.

– The Best Paper Award at AISEC.

From our perspective, specializing in the security of Generative AI was the logical next step. We follow the work of our colleagues at N4 and 4PACE with great interest and are happy to provide advisory and hands-on support when it comes to securing systems.

Despite our extensive warnings about the dangers of an intrinsic vulnerability in language models that cannot be addressed quickly, we still advocate for using this new technology. Security issues are a challenge, but not a show-stopper. After all, we have lived with spam emails, computer viruses, ransomware, and vulnerabilities in software and operating systems for decades, yet we have not given up on using computers.

5 Questions for Decision Makers

Here are a few questions we would like to pose to every decision-maker who is considering the use of AI in their business landscape:

1. Does the solution I intend to apply actually fit my problem?

2. Do I understand what AI does in my company and where its limitations lie?

3. Which data sources does the AI access? Do I have complete control over them? Are there other actors who could control or manipulate the data sources?

4. How much do I rely on the output of AI systems? Does a human recheck the result (a minimal example follows this list)? What happens if an output is incorrect?

5. Does the output of my system include sensitive data? Could this data be leaked and cause harm or compromise my intellectual property?
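To make questions 3 and 4 concrete, here is a minimal sketch of what "controlled data sources" and "a human rechecks the result" can look like in practice. It is an illustration only, not a recommendation of a specific product or API: the names `ALLOWED_SOURCES`, `fetch_context`, `call_llm`, and `answer_with_review` are hypothetical placeholders, and the model call is stubbed out.

```python
# Hypothetical sketch: restrict the data an AI assistant may read (question 3)
# and keep a human in the loop before its output is used (question 4).

# Data sources we fully control; anything else is rejected.
ALLOWED_SOURCES = {"https://intranet.example.com/faq"}


def fetch_context(url: str) -> str:
    """Only retrieve context from sources under our own control."""
    if url not in ALLOWED_SOURCES:
        raise ValueError(f"Untrusted data source rejected: {url}")
    return "Contents of the approved document."  # placeholder retrieval


def call_llm(prompt: str) -> str:
    """Placeholder for the actual model call (e.g. an internal API)."""
    return "Draft answer generated from the retrieved context."


def answer_with_review(question: str, source_url: str) -> str | None:
    """Generate a draft answer, but let a human approve it before it is used."""
    context = fetch_context(source_url)
    draft = call_llm(f"Context:\n{context}\n\nQuestion: {question}")

    print("Proposed answer:\n", draft)
    if input("Approve and use this answer? [y/N] ").strip().lower() != "y":
        print("Rejected; nothing was sent.")
        return None
    return draft


if __name__ == "__main__":
    answer_with_review("What is our vacation policy?",
                       "https://intranet.example.com/faq")
```

The point of the sketch is the structure, not the code itself: the system only reads data you control, and a person explicitly approves the output before it has any effect.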


DR. CHRISTOPH ENDRES
CEO
sequire technology
