user: pveldandi
created: Jan 20, 1970
karma: 4
about: Building InferX, a GPU-native runtime that snapshots the full execution state of LLMs so you can hot-swap models like threads.

Obsessed with inference efficiency, cold-start elimination, and agentic infra.

Previously: enterprise software; now deep in AI infra.

Say hi on LinkedIn: https://www.linkedin.com/in/prashanth-v-98629b115/