Optimizing LLMs: Prioritizing Real Problems over Toy Projects

As AI engineers, we often struggle to write effective prompts for our models, leading to frustration and wasted time. But what if rethinking our approach could make us "great" AI engineers again?

  • 1. The speaker suggests that many AI engineers focus more on tinkering with tools and datasets than on solving real-world problems.
  • 2. They urge the audience to remember that they are engineers first, and should focus on building larger systems that incorporate language models or other AI models.
  • 3. The excitement around new models can be misleading; it's important to have a clear idea of what you want to achieve before throwing tools at a problem.
  • 4. Prompt engineering is difficult and requires a deep understanding of the domain space.
  • 5. Creating examples and using them to prompt language models can help engineers better understand the production system they are working on.
  • 6. Prompt engineering is like "black magic"; many techniques can improve it, such as high-level prompts that instruct the model to output in a specific format (e.g., JSON).
  • 7. Other prompting techniques include chain-of-thought, zero-shot prompting, and length control, along with libraries like Guardrails, DSPy, and Instructor.
  • 8. Sensitivity to minor prompt changes can cause significant problems; chaining libraries and error-handling code can help mitigate this.
  • 9. Insufficient logging and a lack of performance measurement hinder progress; open-source libraries like Arize Phoenix can help with debugging and understanding system behavior.
  • 10. Data leakage is still a concern, even when working with cutting-edge tools; it's crucial to ensure that evaluation data sets are prepared carefully.
  • 11. DSPy has a steep learning curve but can enable users to manage complex systems more efficiently.
  • 12. Breaking down problems into smaller modules and understanding the program's flow is essential to getting the most out of tools like DSPy.
  • 13. Different languages may require different metrics or evaluators; in some cases, custom modules need to be created for specific use cases.
  • 14. Hand-written prompts are crucial for understanding if a task is achievable at all.
  • 15. Avoid using vague metrics like "looks good to me" without annotated data or clear goal metrics.
  • 16. Start with basic evaluations, like regex or string comparisons, before moving on to more complex techniques like LLM-as-judge.
  • 17. Overcomplicating solutions can be counterproductive; make small adjustments instead.
  • 18. The speaker emphasizes the importance of curiosity and first principles thinking for AI engineers.
  • 19. Attendees should find problems where AI can provide unique solutions, not just use tools without a clear purpose.
  • 20. Conferences like this one are meant to inspire and spark curiosity in attendees.
  • 21. The speaker encourages the audience to become great AI engineers by combining tool usage with first principles thinking.
  • 22. Real-world applications of AI, such as AXA Germany's data innovation lab, demonstrate the value of addressing customer needs and focusing on data-driven solutions.
  • 23. Climate change and increasing claims are driving the need for better data management in the insurance industry.
  • 24. Collaboration between machine learning engineers, data scientists, and business units can lead to significant improvements in customer service and agent efficiency.
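Points 6 and 8 above can be sketched as one minimal pattern: instruct the model to return JSON, then validate the response and retry when it fails to parse. This is an illustrative sketch, not the speaker's code; `call_llm` is a hypothetical stand-in for whatever model client you use.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model client, assumed for illustration.
    return '{"sentiment": "positive", "confidence": 0.9}'

def get_structured_output(prompt: str, required_keys: set, max_retries: int = 3) -> dict:
    """Ask for JSON output; retry when the response fails to parse or validate."""
    instruction = (
        prompt
        + "\nRespond with a JSON object containing the keys: "
        + ", ".join(sorted(required_keys))
    )
    last_error = None
    for _ in range(max_retries):
        raw = call_llm(instruction)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = exc  # malformed JSON: try again
            continue
        if required_keys <= data.keys():
            return data
        last_error = ValueError(f"missing keys: {required_keys - data.keys()}")
    raise RuntimeError(f"no valid response after {max_retries} attempts") from last_error

result = get_structured_output(
    "Classify the sentiment of: 'Great talk!'", {"sentiment", "confidence"}
)
```

In practice, libraries mentioned in the talk handle this validate-and-retry loop for you; the point is that error handling around the model call, not a cleverer prompt, is what absorbs prompt sensitivity.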
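Points 15, 16, and 19 argue for starting with cheap, objective metrics over annotated examples before reaching for LLM-as-judge. A minimal sketch of that baseline, with hypothetical example data:

```python
import re

def exact_match(prediction: str, reference: str) -> bool:
    """Strictest baseline: normalized string equality."""
    return prediction.strip().lower() == reference.strip().lower()

def regex_match(prediction: str, pattern: str) -> bool:
    """Looser baseline: does the output contain an expected pattern?"""
    return re.search(pattern, prediction) is not None

def score(dataset, metric) -> float:
    """Fraction of annotated examples the metric accepts."""
    hits = sum(1 for prediction, expected in dataset if metric(prediction, expected))
    return hits / len(dataset)

# Hypothetical annotated examples: (model output, expected pattern).
examples = [
    ("The capital is Paris.", r"\bParis\b"),
    ("I think it's Lyon.", r"\bParis\b"),
]
accuracy = score(examples, regex_match)  # 0.5
```

A metric this simple already replaces "looks good to me" with a number you can track across prompt changes; only once it stops discriminating is an LLM judge worth its cost.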
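On point 9: before adopting a tracing platform like Arize Phoenix, even a thin logging wrapper around every model call captures the prompts, responses, and latencies you need for debugging. A sketch under stated assumptions; the `echo` model is a placeholder, not a real client.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm_calls")

def logged_call(model_fn, prompt: str) -> str:
    """Wrap any model call, logging input, output, and latency."""
    start = time.perf_counter()
    response = model_fn(prompt)
    elapsed_ms = (time.perf_counter() - start) * 1000
    logger.info("prompt=%r response=%r latency_ms=%.1f", prompt, response, elapsed_ms)
    return response

echo = lambda p: p.upper()  # hypothetical stand-in model
out = logged_call(echo, "hello")
```

The same wrapper is a natural place to count retries and failures, which gives you the "measure performance" half of the speaker's advice almost for free.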

Source: AI Engineer via YouTube

❓ What do you think? What's the most significant obstacle preventing you from becoming a "great" AI Engineer, and how do you envision overcoming it to achieve your goals? Feel free to share your thoughts in the comments!