Lakera Guard protects LLMs from prompt injection and other risks
Lakera launched with $10M in funding, which it plans to invest in protecting enterprises from well-known LLM risks such as prompt injection and hallucinations. As part of their work on AI security, the startup's co-founders also served as advisors on the EU AI Act.