AI infra & open-source models #1
We're hosting our first dstack user group event in Berlin and inviting AI engineers, researchers, and infrastructure experts interested in AI infra and open-source models. Join us for lightning talks, demos, and focused discussions.
This event is designed to bring together people deeply involved in the AI space to exchange ideas, share technical insights, and connect.
Schedule:
6:00pm – Arrival
6:30pm – Lightning talks and demos (Part 1)
7:15pm – Break
7:30pm – Lightning talks and demos (Part 2)
8:15pm – Networking with food & drinks
Each demo will take 5 minutes, with a Q&A session after all the demos.
Speakers:
Tim from RunPod – 'Deploy AI apps using natural language'
Szymon from Aleph Alpha – 'How to keep your GPU happy'
Andrey from dstack – 'dstack: Beyond Kubernetes and Slurm'
Scott from Jina AI – 'Jina AI: Feature-rich enterprise-ready models'
Andrey from Qdrant – 'miniCOIL: a new model for Sparse Neural Retrieval'
Tanja from Rasa – 'Keep CALM and Generate Commands: A New Dialogue Understanding Architecture'
Roman from Nixiesearch – 'Serving a 9B embedding model without losing your mind'
David from Lambda – 'Winning the ARC prize 2024'
Anton from Tokalon.ai – 'Budget-friendly image generation'
Piotr from Aleph Alpha – 'Prompt case vs token-by-token decoding'
Artem from JetBrains – 'Fast track for data synchronization in AI'
Mish from E2B – 'Building an open-source computer use agent'
Vedant from Aleph Alpha – 'Steering vectors'
Mathis from deepset – 'Building open source agents with Haystack'
Tejas from DataStax – 'Build your own local-first, local-only AI workflow'
Jay from TextCortex – 'Saving your sanity and money with pgvector'
About dstack:
dstack is an open-source alternative to Kubernetes and Slurm, simplifying AI development and deployment across clouds and on-prem with support for NVIDIA, AMD, and TPU hardware.
Thanks to RunPod, Aleph Alpha, and Tensor Ventures for supporting this event.