Exploring Ghost Pilot: Outsourcing Boring Tasks to AI for Efficient Code Development
Join me, Gajan Patel, Director of Engineering at Palo Alto Networks, as I explore the concept of "Selfish Evolving Code" and introduce a side project that aims to outsource boring tasks to CI/CD pipelines using AI-powered tools.
- 1. Gajan Patel, Director of Engineering at Palo Alto Networks, talks about "Selfish Evolving Code."
- 2. He is not promoting any product; it's a side project he has been exploring.
- 3. The talk is dedicated to his close friend Nikil, who recently passed away.
- 4. Gajan discusses the importance of staying in a flow state while coding.
- 5. Writing code is only a small portion of the overall software development workflow.
- 6. He introduces the concept of "Ghost Pilot," an offline system that supports deliberate thinking and context awareness.
- 7. Ghost Pilot differs from in-editor assistants like Sourcegraph and GitHub Copilot, which focus on quick, just-in-time suggestions.
- 8. Code reviewers need time for iteration and reflection, considering the full context of the code.
- 9. Gajan presents a high-level developer workflow with four steps: improving variable names and code comments, adding unit tests, identifying security issues, and fixing them (a rough sketch of the first two steps appears after this list).
- 10. Improving variable names and code comments draws on knowledge beyond coding itself, such as art and human behavior.
- 11. Clear code comments help AI tools understand the intended behavior of a code block.
- 12. Adding unit tests early establishes a baseline for the code's behavior.
- 13. Identifying security issues based on environmental context, and then fixing them, improves code quality.
- 14. The talk emphasizes using AI tools like Large Language Models (LLMs) to assist in these steps.
- 15. LLMs can help discover corner cases, set baselines, personalize code, and generate unit tests.
- 16. Gajan discusses the importance of context in CI/CD pipelines, including cloud provider, service location, data privacy, application type, networking, PRDs, and more (see the context sketch after this list).
- 17. Feeding that context to the AI helps prioritize issues based on historical bugs and company policies.
- 18. The talk introduces an example of simulating three AI "employees" (a red team engineer, a Python developer, and an engineering manager) that debate security issues and prioritize fixes; see the debate sketch after this list.
- 19. Gajan gives an example of how LLMs can identify logical issues that traditional SAST tools might miss, such as a Kubernetes-related bug in Go code.
- 20. The talk emphasizes using AI suggestions as starting points, with human reviewers making the final decision on which issues to fix.
- 21. The human reviewer should consider the agents' conversation, the cited security policies, and the surrounding context before making a decision.
- 22. Gajan encourages running unit tests early to establish a baseline and save time in code reviews (see the baseline-test sketch after this list).
- 23. He will share his GitHub repository, along with GitLab CI and GitHub Actions configuration files, for further exploration.
- 24. The talk concludes with a thank-you to the audience and to Nikil for his enthusiasm about the conference.
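
The summary stays at a high level, so here is a minimal sketch of how the first two workflow steps (renaming/commenting and test generation) might run as a CI step. The `ask_llm` stub, prompts, and file paths are assumptions for illustration, not the speaker's actual code.

```python
"""Hypothetical CI step for the first two workflow stages: ask a model to
propose clearer names/comments for changed files and generate starter unit
tests. The `ask_llm` stub, prompts, and paths are illustrative placeholders."""
import pathlib
import subprocess


def ask_llm(prompt: str) -> str:
    # Placeholder: swap in whichever model client the pipeline actually uses.
    raise NotImplementedError("wire in your LLM client here")


def changed_python_files() -> list[pathlib.Path]:
    # Diff against the main branch to find files touched by this change.
    out = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [pathlib.Path(p) for p in out.splitlines() if p.endswith(".py")]


def main() -> None:
    for path in changed_python_files():
        source = path.read_text()
        # Stage 1: clearer variable names and comments, behavior unchanged.
        path.write_text(ask_llm(
            "Rewrite this module with clearer variable names and comments "
            "without changing behavior:\n\n" + source
        ))
        # Stage 2: starter tests that pin down current behavior as a baseline.
        tests = ask_llm(
            "Write pytest unit tests covering the current behavior of this "
            "module, including corner cases:\n\n" + path.read_text()
        )
        pathlib.Path(f"tests/test_{path.stem}.py").write_text(tests)


if __name__ == "__main__":
    main()
```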
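
A context sketch for item 16: one way the deployment facts and policies described in the talk could be bundled and handed to a model for prioritization. The `PipelineContext` fields, example values, and the `ask_llm` stub are all hypothetical.

```python
"""Hypothetical sketch of the deployment context a pipeline could feed to a
model when prioritizing findings. Field names and values are illustrative."""
from dataclasses import dataclass, field


def ask_llm(prompt: str) -> str:
    # Placeholder: swap in whichever model client the pipeline actually uses.
    raise NotImplementedError("wire in your LLM client here")


@dataclass
class PipelineContext:
    cloud_provider: str                     # e.g. "AWS"
    service_location: str                   # e.g. "eu-west-1, internal-only"
    data_privacy: str                       # e.g. "handles PII, GDPR in scope"
    application_type: str                   # e.g. "internal REST API"
    historical_bugs: list[str] = field(default_factory=list)
    security_policies: list[str] = field(default_factory=list)


def prioritise(findings: list[str], ctx: PipelineContext) -> str:
    # Rank findings using deployment facts, past bugs, and written policy
    # rather than generic severity scores alone.
    prompt = (
        "Rank these security findings for a service with the context below, "
        "and cite the policy or past bug that justifies each ranking.\n\n"
        f"Context: {ctx}\n\nFindings:\n- " + "\n- ".join(findings)
    )
    return ask_llm(prompt)
```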
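
A debate sketch for item 18: a minimal version of the three AI "employees" arguing over a finding, assuming a simple turn-based loop. Role prompts and the `ask_llm` stub are placeholders; the talk does not specify how the agents are orchestrated.

```python
"""Hypothetical three-'employee' debate: a red team engineer and a Python
developer argue over a finding, then an engineering manager summarizes a
recommendation for the human reviewer. All prompts are illustrative."""


def ask_llm(prompt: str) -> str:
    # Placeholder: swap in whichever model client the pipeline actually uses.
    raise NotImplementedError("wire in your LLM client here")


ROLES = {
    "red team engineer": "Argue why this finding is exploitable in practice.",
    "python developer": "Argue the cost, risk, and effort of fixing it in this codebase.",
    "engineering manager": "Weigh both arguments against company security policy and assign a priority.",
}


def debate(finding: str, policy_excerpt: str, rounds: int = 2) -> str:
    transcript = f"Finding:\n{finding}\n\nPolicy excerpt:\n{policy_excerpt}\n"
    for _ in range(rounds):
        for role in ("red team engineer", "python developer"):
            reply = ask_llm(f"You are a {role}. {ROLES[role]}\n\n{transcript}")
            transcript += f"\n[{role}] {reply}\n"
    # The manager's verdict is what the human reviewer ultimately sees.
    verdict = ask_llm(
        f"You are an engineering manager. {ROLES['engineering manager']}\n\n{transcript}"
    )
    return transcript + f"\n[engineering manager] {verdict}\n"
```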
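
A baseline-test sketch for item 22: run the generated tests early and block the pipeline if the baseline breaks. The `tests/` path and exit behavior are assumptions.

```python
"""Hypothetical baseline gate: run the generated tests early in the pipeline
so later AI-suggested fixes can be checked against the original behavior."""
import subprocess
import sys

result = subprocess.run(["pytest", "tests/", "-q"], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    # If the baseline fails before review, stop the pipeline and surface
    # the output to the human reviewer instead of merging.
    sys.exit("Baseline unit tests failed; blocking merge pending review.")
```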
Source: AI Engineer via YouTube
❓ What do you think of the ideas shared in this video? Feel free to share your thoughts in the comments!