Amagi is a proactive AI assistant that sees your screen, listens, remembers, and helps you stay focused. It is designed to run across devices with real-time context awareness.

Long Description:

Amagi is not just another chatbot. It is a context-aware, proactive AI assistant built to live with you across devices. Inspired by the chaos of modern digital life, Amagi observes your screen activity, listens to your voice, and keeps track of your context so it can act before you even ask.

The vision was ambitious:
- A floating UI widget that lives on your screen
- A Groq-powered LLaMA 4 model for both text and image reasoning
- Whisper-based speech-to-text (STT) for natural voice interaction
- Memory storage in a vector database for context recall
- Cross-device session management with Google login
- Real-time screen-summary uploads for proactive suggestions

Every part of the stack was handcrafted: FastAPI, Qdrant, and Authlib on the backend, and a minimal floating widget built with Tkinter on the client. The goal was to keep core interactions natural and human. You say, "Hey Amagi, remind me about this video," click the widget later, and get reminded of the anime suggestion from earlier.

But execution wasn't easy. Technical blockers hit hard:
- torch refused to install due to system limitations
- Qdrant's vector filters bugged out
- OAuth2 verification required multiple rewrites
- Docker wasn't even an option on my machine
- Time ran out

And still, I kept building. Even as things broke, I pushed through the stack again and again, rewriting modules, replacing dependencies, switching APIs, and debugging threads, only to run into new problems at every turn.

Amagi didn't ship. But it's real, and it's coming. This hackathon submission is just the beginning: the architecture is mapped, the core components are wired, and the story is being told, because sometimes the biggest breakthrough isn't in the code but in not giving up. Amagi may have missed the deadline, but the journey has only started.
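To make the architecture above concrete, here is a minimal, hypothetical sketch of the context-upload loop: the floating widget posts a screen summary to a FastAPI backend, the summary is embedded and stored in Qdrant for later recall, and a Groq-hosted LLaMA model is asked for a proactive suggestion. This is not the project's actual code; the endpoint path, collection name, prompt, and model ID are placeholders, and fastembed stands in as a torch-free embedder since the write-up only mentions "vector databases" without naming an embedding model.

```python
# Hypothetical sketch of Amagi's context-upload loop, assuming:
#   * fastembed as a torch-free embedder (the write-up only says "vector databases")
#   * placeholder names for the endpoint, collection, prompt, and Groq model ID
import os
import uuid

from fastapi import FastAPI
from pydantic import BaseModel
from fastembed import TextEmbedding
from groq import Groq
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

COLLECTION = "amagi_context"                              # assumed collection name
GROQ_MODEL = "meta-llama/llama-4-scout-17b-16e-instruct"  # placeholder model ID
VECTOR_SIZE = 384                                         # bge-small-en-v1.5 output size

app = FastAPI()
groq = Groq(api_key=os.environ["GROQ_API_KEY"])
qdrant = QdrantClient(url=os.environ.get("QDRANT_URL", "http://localhost:6333"))
embedder = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")


class ScreenContext(BaseModel):
    user_id: str
    summary: str  # text summary of the current screen, produced on the client


@app.on_event("startup")
def ensure_collection() -> None:
    # Create the memory collection once, if it does not exist yet.
    existing = [c.name for c in qdrant.get_collections().collections]
    if COLLECTION not in existing:
        qdrant.create_collection(
            collection_name=COLLECTION,
            vectors_config=VectorParams(size=VECTOR_SIZE, distance=Distance.COSINE),
        )


@app.post("/context")
def upload_context(ctx: ScreenContext) -> dict:
    # 1. Remember the screen summary so it can be recalled later ("that video from earlier").
    vector = list(embedder.embed([ctx.summary]))[0].tolist()
    qdrant.upsert(
        collection_name=COLLECTION,
        points=[PointStruct(
            id=str(uuid.uuid4()),
            vector=vector,
            payload={"user_id": ctx.user_id, "summary": ctx.summary},
        )],
    )
    # 2. Ask the Groq-hosted model whether a proactive nudge is worth surfacing now.
    reply = groq.chat.completions.create(
        model=GROQ_MODEL,
        messages=[
            {"role": "system",
             "content": "You are Amagi, a proactive assistant. Suggest one short, "
                        "helpful action based on the user's screen, or reply 'none'."},
            {"role": "user", "content": ctx.summary},
        ],
    )
    return {"suggestion": reply.choices[0].message.content}
```

In the full design, the Authlib-backed Google login would gate this endpoint, the widget would poll it for suggestions, and recall would query the same collection filtered by user_id.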
Category tags:

An AI-driven tool that reviews GitHub pull requests in real time, providing clear and intelligent code feedback using Groq-accelerated LLaMA models and the BLACKBOX.AI Coding Agent.
innoventors-blackbox-track
Flowrish AI helps students think better, not less. It guides reflection instead of giving answers—strengthening minds, not replacing them. Offline-first on Snapdragon X Elite, with LLaMA 3 locally and Groq online. Because learning should grow you.
42AI Qualcomm Track
An AI-powered shopping assistant built with FastAPI, the Groq API (LLaMA models), and a Neo4j knowledge graph for personalized e-commerce experiences.
Hackcelerate - Prosus Track
A privacy-focused toolkit for real-time screen OCR and audio transcription on any PC, combining universal image text extraction, audio-to-text, and fast local semantic search—powered by Edge AI and Groq API.
Illuminative Lab - Qualcomm Track