From LLMs to Ramp: Scaling Agents with Compute and AI

Join me as I explore the idea that systems that scale with compute beat those that don't, and how this exponential trend can revolutionize the way we build agents and software.

  • 1. Speaker has been working on large language models (LLMs) for 4 years, focusing on improving chatbot intelligence.
  • 2. Their work began to take off with the release of ChatGPT, which enabled them to build an AI agent company.
  • 3. Early models were frustratingly stupid and required a lot of custom code to function reliably.
  • 4. As models became smarter, less custom code was needed, revealing patterns in effective agent building.
  • 5. Speaker built a structured extraction library called Jsonformer to constrain model outputs to valid JSON.
  • 6. The core idea the speaker wants to convey is that systems that scale with compute beat systems that don't.
  • 7. Rigid, deterministic systems can be outperformed by more flexible systems that utilize greater computational resources.
  • 8. Exponential growth is rare and valuable; when found, it should be embraced and leveraged.
  • 9. Examples from history show that general methods scale better than handcrafted solutions in chess, go, computer vision, and Atari games.
  • 10. The speaker's company, Ramp, is a finance platform that uses AI to automate various tasks.
  • 11. One system at Ramp is the Switching Report, which helps users transfer transaction data from third-party card providers to Ramp.
  • 12. The original solution for the Switching Report was to manually write code for 50 common third-party card vendors, but this approach has limitations.
  • 13. A more general solution involves using LMs to classify column types and map them to a desired schema.
  • 14. An even more flexible approach is to let an LM interpret the CSV and generate the desired output format, which may require significantly more compute power.
  • 15. The speaker argues that engineer time is more scarce than computational resources, justifying the use of high-compute solutions.
  • 16. Different approaches can be visualized as a spectrum, with classical compute on one end and fuzzy LM (neural networks and matrix multiplication) on the other.
  • 17. Modern systems often involve a mix of both classical and fuzzy LM components.
  • 18. The speaker proposes that future systems may rely more heavily on fuzzy LM, allowing for more adaptability and reducing the need for explicit code.
  • 19. In this model, the backend would be an LM that has access to tools, interpreters, and databases, enabling it to handle requests and generate responses more organically.
  • 20. The speaker demonstrates a mail client that uses an LM as its backend, rendering UI elements based on LM interpretations of user interactions.
  • 21. This approach allows for dynamic, real-time adjustments to the UI without requiring explicit code changes or backend calls.
  • 22. While this technology is still in its infancy, it has the potential to revolutionize software development and deployment.
  • 23. The speaker encourages the audience to consider the possibilities of fuzzy LM in their own projects and to embrace exponential trends in technology.
  • 24. As computational resources become cheaper and more powerful, systems that leverage these trends will likely outperform traditional, rigid systems.
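The column-mapping approach in points 13-14 can be sketched as follows. This is a minimal illustration, not Ramp's implementation: the target schema, the sample headers, and the offline stub standing in for a real LLM call are all assumptions for the example.

```python
import csv
import io
import json

# Illustrative target schema; the talk does not specify Ramp's actual columns.
TARGET_SCHEMA = ["date", "merchant", "amount"]

def classify_columns(headers, llm=None):
    """Ask a language model to map arbitrary CSV headers onto the target schema.

    `llm` is a callable(prompt) -> JSON string. A deterministic stand-in is
    used when no model is supplied, so the example runs offline.
    """
    prompt = (
        f"Map each CSV header to one of {TARGET_SCHEMA} or null. "
        f"Headers: {list(headers)}. Answer as a JSON object."
    )
    if llm is None:
        # Hypothetical guesses a model might return for these headers.
        guesses = {"txn date": "date", "vendor": "merchant", "usd total": "amount"}
        return {h: guesses.get(h.lower()) for h in headers}
    return json.loads(llm(prompt))

def remap(csv_text, llm=None):
    """Reshape a third-party vendor's CSV into the target schema."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    mapping = classify_columns(rows[0].keys() if rows else [], llm)
    return [
        {target: row[src] for src, target in mapping.items() if target}
        for row in rows
    ]

sample = "Txn Date,Vendor,USD Total\n2024-01-05,Acme,19.99\n"
print(remap(sample))  # [{'date': '2024-01-05', 'merchant': 'Acme', 'amount': '19.99'}]
```

The general version spends one model call per unfamiliar vendor format instead of one engineer per vendor, which is the compute-for-engineer-time trade the speaker argues for in point 15.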
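The LM-as-backend idea in points 19-21 can be sketched as a single event handler that forwards user interactions and app state to a model, which responds with a UI description to render. Everything here is a hypothetical sketch: the event shape, the UI JSON format, and the stubbed model are assumptions, not the demo's actual protocol.

```python
import json

def stub_llm(prompt):
    # Stand-in for a real model call with access to tools, interpreters,
    # and a database, as described in the talk. Returns a canned UI.
    return json.dumps({
        "ui": [
            {"type": "list", "items": ["Welcome to Ramp", "Invoice #42"]},
            {"type": "button", "label": "Archive"},
        ]
    })

def handle_event(event, state, llm=stub_llm):
    """Route a UI event to the model instead of to handwritten endpoint code.

    The model decides what UI to render next, so new behaviors need no
    explicit backend changes.
    """
    prompt = (
        "You are the backend of a mail client. "
        f"State: {json.dumps(state)}. Event: {json.dumps(event)}. "
        "Respond with JSON describing the next UI to render."
    )
    return json.loads(llm(prompt))["ui"]

ui = handle_event({"type": "click", "target": "inbox"}, {"folder": "inbox"})
for element in ui:
    print(element["type"])  # list, button
```

The trade-off, as the talk notes, is latency and cost: every interaction becomes an inference call, which only makes sense if compute keeps getting cheaper (point 24).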

Source: AI Engineer via YouTube

❓ What do you think of the ideas shared in this video? Feel free to share your thoughts in the comments!