In 2024, sequire was represented at numerous conferences, panels, and workshops. A few examples: Julia Masloh, AI Solution Engineer, joined a panel at the State Chancellery discussing “Artificial Intelligence vs. Real Communication.” Our CEO Dr. Christoph Endres gave over a dozen talks on security vulnerabilities in generative AI, as well as presentations on the NIS-2 directive. Together with software engineer Elizabeth Pich, we also led our internal innovation day on AI and facilitated a workshop at East Side Fab on the opportunities and risks of artificial intelligence.
Across nearly all formats, one thing was clear: pressing questions around the introduction of AI, the security of large language models (LLMs), and new regulations like NIS-2 were top of mind.
2024 was a year of listening, explaining, and reflecting.
This article summarizes our most important insights and recommendations.
Generative AI was undoubtedly one of the most talked-about topics in 2024. Ever since the launch of ChatGPT in late 2022, interest has remained high—yet the level of understanding often lags behind.
In many organizations, there’s a strong desire to “do something with AI.” But often missing are a clear strategy, a clean and usable data foundation and a concrete use case that delivers real value.
What we consistently see in our workshops and conversations:
“AI” is often used as an innovation label—without solving a real problem.
We therefore work with participants to clarify:
Additionally, many fundamental terms are still frequently confused. Distinguishing between Artificial Intelligence (AI), Machine Learning, and Generative AI is just as important as understanding the differences between descriptive, predictive, and generative use cases.
Quick Overview: AI, Machine Learning, and Generative AI
Artificial Intelligence (AI)
Umbrella term for systems that perform tasks requiring human intelligence (e.g. language, decisions, perception).
Machine Learning (ML)
Subfield of AI where algorithms learn patterns from data to make predictions or decisions.
Generative AI (GenAI)
Specialization within AI focused on generating new content such as text, images, or code.
Conclusion: The hype is understandable—but long-term value only comes with a clear goal.
A frequently overlooked yet effective use case: Capturing and sharing internal knowledge in a structured way.
Our own example: We use a custom RAG-based chatbot named STACY internally. It helps us consolidate our expertise on AI security and make it easily accessible—for both new and existing team members.
Quick Overview: Retrieval-Augmented Generation (RAG)
A method that combines generative AI with trusted documents—providing responses that are both natural and fact-based.
But here’s the challenge: AI initiatives rise or fall with the data. And this is where we often see issues: Outdated, incomplete, or unstructured data undocumented processes and unclear responsibilities.
And sometimes — and this is part of the truth — traditional solutions without AI are the better choice. Conversations about AI can be a valuable opportunity to reflect on overall process optimization.
While most organizations have developed a sense of responsibility around data protection, the same can’t yet be said for AI-related risks.
Common statements we hear in our workshops:
But these assumptions fall short.
Prompt Injection, Model Misalignment, and tampered training data are risks that exist regardless of hosting model.
These risks will only become more relevant—especially with the upcoming EU AI Act.
The topic of AI safety is here to stay: Either through incidents or through regulation and legal requirements.
Quick Overview
Prompt Injection
Attackers inject manipulated inputs to provoke undesired AI behavior.
Mode Misalignment
The AI behaves in a way that diverges from intended goals or user expectations.
EU AI Act
The first comprehensive EU regulation for AI, using a risk-based classification to ensure transparency, safety, and accountability. The Act distinguishes between four levels of risk: prohibited AI – e.g. social scoring, mass surveillance, high-risk AI – e.g. HR tools, infrastructure monitoring, limited risk – e.g. chatbots, recommendation systems and minimal risk – e.g. spam filters, games.
2024 taught us one thing above all: The desire to work with AI is strong—but without strategy, it goes nowhere. Security risks like Prompt Injection and Model Misalignment are still vastly underestimated. Successful AI projects begin with the right questions—not with the tool. And sometimes, the right solution isn’t AI at all.
As the saying goes: “Never change a winning team.”
So in 2025, we’ll continue doing what works best:
Keynotes and panels on AI & security
Interactive workshops on the NIS-2 directive, AI strategy & process evaluation
Tailored consulting on feasible, high-impact use cases
Because sustainable innovation doesn’t come from hype—it comes from clarity, understanding, and realistic thinking.
CHRISTOPH ENDRES
CEO
sequire technology