Making AI real with the Groq LPU inference engine
Current options for deploying GenAI are slow and expensive, and cannot support real-time inference, such as running AI at conversational speeds. Groq's LPU inference engine offers a fundamentally new type of inference solution, with performance up to 10 times faster than standard models. Join this session to learn how Groq's approach opens the door to a new class of real-time AI solutions that will transform organisations and solve big challenges.
Speakers
![Jonathan Ross](https://web-summit-avenger.imgix.net/production/avatars/original/52e9943600761a1ea310a46cdd9b86f363c2daa5.png?ixlib=rb-3.2.1&auto=format&fit=crop&crop=faces&w=300&h=300)
Jonathan Ross
Founder & CEO, Groq
![Mohamed Taha](https://web-summit-avenger.imgix.net/production/avatars/original/7fefcbfc1bcce6128aeedf713d91b82e8d073b69.jpeg?ixlib=rb-3.2.1&auto=format&fit=crop&crop=faces&w=300&h=300)
Mohamed Taha
Senior Correspondent
Topics
AI and machine learning, Design, Hardware and robotics