Jun's Vision: Augmenting Human Capabilities Through Next-Gen AI Interfaces
Join me, Jun, founding engineer at Tusk, as I share my thoughts on building the next generation of AI interfaces that put humans at the center, augment our capabilities, and help us be more thoughtful and creative.
- 1. Jun is a founding engineer at Tusk and will discuss building the next generation of AI interfaces.
- 2. The focus is on AI systems that put humans at the center, augmenting their capabilities and supporting thoughtfulness and creativity.
- 3. The talk is a collection of ideas Jun has been thinking about, some more speculative than others, meant to encourage builders in the space to consider these patterns and principles.
- 4. In 2025, agents are expected to be everywhere, performing research, browsing for users, and automating tasks.
- 5. Agent-based tooling and protocols are becoming more sophisticated, with large chunks of knowledge work anticipated to be automated in the future.
- 6. Many AI agents focus on automating discrete tasks, since discrete tasks are easier to quantify, sell, benchmark, and compare across systems.
- 7. Over-reliance on automation can lead to general laziness and atrophy of skills; many high-judgment domains like coding and design still require tight human supervision.
- 8. The main thesis of the talk is that AI should help humans produce high-quality work rather than automate complex tasks suboptimally.
- 9. The talk introduces ideas for augmentation-based UX, looking at interaction patterns where AI helps users catch blind spots, spark creativity, and make more thoughtful decisions.
- 10. It also lays out principles for designing AI products that emphasize and grow human capabilities and build trustworthy human-AI partnerships.
- 11. Comparing automation and augmentation approaches: automation writes the entire email or code change and sends it on behalf of the user, while augmentation helps users brainstorm key points and suggests improvements (see the email-drafting sketch after this list).
- 12. In augmentation, the human is still in control, with an AI thinking partner reviewing work and suggesting improvements.
- 13. The mindset shift: automation hands responsibility for the task to the AI system, like an offshore contractor, whereas augmentation treats the AI as a team member that grows together with the user.
- 14. The first core interaction pattern is blind spot detection, which is immediately compelling because everyone has blind spots in their thinking.
- 15. The AI should be designed to reveal blind spots at the right time, once a unit of work can be assumed ready for review.
- 16. Tusk, an AI testing platform, finds edge cases and bugs based on pull requests; it validates potential issues and surfaces them for review.
- 17. Users can provide feedback by clicking thumbs up or thumbs down, or by explaining their reasoning, helping the AI learn from user reviews (see the feedback-loop sketch after this list).
- 18. The second pattern is cognitive partnership, moving from stateless answering machines to systems that adapt to a user's mental models.
- 19. Building personalization without being creepy is essential; users need to feel understood but not surveilled.
- 20. Proactive guidance, the third pattern, is the hardest to get right; it should feel like serendipity, not interruption (see the timing-gate sketch after this list).
- 21. To build trust in augmentation systems, trust must be progressive, contextual, and bidirectional.
- 22. Trust should be built with low-stakes suggestions before moving to high-impact decisions, allowing the system to prove itself on small things first (see the trust-escalation sketch after this list).
- 23. AI should facilitate skill growth, visualize skill development, evolve with the user, and create an emotional connection with users.
- 24. The future of AI interfaces should focus on becoming more fully human, amplifying intuition, taste, and creativity rather than replacing human capabilities.
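To make the automation-versus-augmentation contrast from point 11 concrete, here is a minimal TypeScript sketch. It is purely illustrative: the names (`automateEmail`, `augmentEmail`, `callModel`, the `Suggestion` type) are hypothetical and not from the talk or any real product. The only point is that automation returns a finished artifact, while augmentation returns suggestions that the human stays in control of.

```typescript
// Hypothetical types -- illustrative only, not from the talk or any real API.
interface Draft {
  body: string;
}

interface Suggestion {
  kind: "key_point" | "rewrite" | "question";
  text: string;
}

// Automation: the system produces (and could even send) the finished email.
// The human's role shrinks to approving or ignoring the output.
async function automateEmail(prompt: string): Promise<Draft> {
  const body = await callModel(`Write and finalize this email: ${prompt}`);
  return { body };
}

// Augmentation: the system brainstorms key points and suggests improvements,
// but the human writes the email and decides what to keep.
async function augmentEmail(humanDraft: Draft): Promise<Suggestion[]> {
  const raw = await callModel(
    `List missing key points and possible improvements for:\n${humanDraft.body}`
  );
  return raw
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((text): Suggestion => ({ kind: "rewrite", text }));
}

// Stand-in for an LLM call; any provider SDK could sit behind this.
async function callModel(prompt: string): Promise<string> {
  return `(model output for: ${prompt.slice(0, 40)}...)`;
}
```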
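Points 16 and 17 describe a blind-spot workflow: validated issues are surfaced on a pull request, and the reviewer's thumbs up or down (plus any explanation) feeds back into the system. The sketch below is one way that loop could be modeled; it is an assumption-laden illustration, not Tusk's actual API, and all names (`PotentialIssue`, `surfaceForReview`, `recordFeedback`) are hypothetical.

```typescript
// Hypothetical data model for surfacing validated issues on a pull request.
// This is an illustrative sketch of the feedback loop, not a real API.
interface PotentialIssue {
  id: string;
  pullRequest: string;
  description: string;
  validated: boolean; // only validated issues are shown to the reviewer
}

type Verdict = "thumbs_up" | "thumbs_down";

interface Feedback {
  issueId: string;
  verdict: Verdict;
  reasoning?: string; // optional explanation the reviewer can attach
}

const feedbackLog: Feedback[] = [];

// Surface only issues that survived validation, so the reviewer's attention
// goes to likely blind spots rather than noise.
function surfaceForReview(issues: PotentialIssue[]): PotentialIssue[] {
  return issues.filter((issue) => issue.validated);
}

// Record the reviewer's verdict; a real system would use this signal to tune
// which kinds of issues it raises in the future.
function recordFeedback(issueId: string, verdict: Verdict, reasoning?: string): void {
  feedbackLog.push({ issueId, verdict, reasoning });
}

// Usage: surface issues on a (made-up) PR, then capture a review decision.
const shown = surfaceForReview([
  { id: "1", pullRequest: "example-pr", description: "Missing null check on empty cart", validated: true },
  { id: "2", pullRequest: "example-pr", description: "Speculative race condition", validated: false },
]);
recordFeedback(shown[0].id, "thumbs_up", "Confirmed: reproduces with an empty cart");
```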
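Point 20 calls proactive guidance the hardest pattern to get right: it should feel like serendipity, not interruption, and point 15 suggests surfacing things once a unit of work is ready. Below is one hedged way to express that as a gating check; the signals and thresholds (idle time, completed work unit, dismissal count) are assumptions for illustration, not something prescribed in the talk.

```typescript
// Illustrative gate for proactive guidance: only surface a tip when the user
// appears to have finished a unit of work, not mid-flow. Signals and
// thresholds are assumptions, not from the talk.
interface WorkContext {
  unitOfWorkComplete: boolean; // e.g. draft saved, PR opened, test run finished
  secondsSinceLastEdit: number;
  dismissalsToday: number; // respect earlier "not now" signals
}

function shouldOfferGuidance(ctx: WorkContext): boolean {
  const atNaturalPause = ctx.unitOfWorkComplete || ctx.secondsSinceLastEdit > 120;
  const notFatigued = ctx.dismissalsToday < 3;
  return atNaturalPause && notFatigued;
}

// Usage: check the gate before interrupting.
if (shouldOfferGuidance({ unitOfWorkComplete: true, secondsSinceLastEdit: 10, dismissalsToday: 0 })) {
  console.log("Offer the suggestion now, at a natural pause.");
}
```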
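Point 22 says trust should be progressive: the system proves itself on low-stakes suggestions before it is allowed near high-impact decisions, and point 21 adds that trust is bidirectional. A minimal sketch of one way such a policy could look; the level names and thresholds are assumptions, not prescriptions from the talk.

```typescript
// Illustrative trust-escalation policy. Levels, thresholds, and names are
// assumptions made for this sketch.
type TrustLevel = "suggest_only" | "draft_for_approval" | "act_with_review";

interface TrustState {
  accepted: number; // suggestions the user accepted
  rejected: number; // suggestions the user rejected or reverted
}

// Escalate only after the system has proven itself on small things:
// enough accepted suggestions and a high acceptance rate.
function currentTrustLevel(state: TrustState): TrustLevel {
  const total = state.accepted + state.rejected;
  const acceptanceRate = total === 0 ? 0 : state.accepted / total;

  if (state.accepted >= 50 && acceptanceRate >= 0.9) return "act_with_review";
  if (state.accepted >= 10 && acceptanceRate >= 0.75) return "draft_for_approval";
  return "suggest_only";
}

// Trust is bidirectional: negative feedback moves the system back toward
// lower-stakes behavior instead of silently accumulating autonomy.
function updateTrust(state: TrustState, accepted: boolean): TrustState {
  return accepted
    ? { ...state, accepted: state.accepted + 1 }
    : { ...state, rejected: state.rejected + 1 };
}
```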
Source: AI Engineer via YouTube
❓ What do you think? What are the most critical aspects of building trustworthy AI-human partnerships, and how can we prioritize these elements in designing augmentation systems? Feel free to share your thoughts in the comments!