Hi everyone,
Over the weekend, I built AutoMACE at the Nano Banana Hackathon (hosted by Google DeepMind and Cerebral Valley, in collaboration with ElevenLabs and fal on Kaggle).
What AutoMACE does:
- Generates a JSON storyboard (tone, scenes, CTAs, hashtags) from a simple product brief.
- Produces on-brand scene visuals using Google’s Nano Banana (Gemini 2.5 Flash Image) model, combined with natural voice-over from ElevenLabs.
- Outputs a ready-to-post MP4, with cinematic video generation powered by Google Veo, enhanced with branded overlays (logo, text, colors).
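To make the first step concrete, here is a minimal sketch of what a storyboard generator could look like. The field names (`tone`, `scenes`, `cta`, `hashtags`) mirror the items listed above, but the exact schema is illustrative, not AutoMACE's actual output:

```python
import json

def storyboard_from_brief(brief: dict) -> dict:
    """Turn a short product brief into a storyboard skeleton.

    Hypothetical shape for illustration; AutoMACE's real schema
    (produced by the LLM) may differ.
    """
    return {
        "tone": brief.get("tone", "upbeat"),
        "scenes": [
            {
                "id": i,
                "visual_prompt": f"{brief['product']} shown {beat}",
                "voiceover": f"{brief['product']}: {beat}",
            }
            for i, beat in enumerate(brief.get("beats", []), start=1)
        ],
        "cta": brief.get("cta", "Learn more"),
        "hashtags": [f"#{tag}" for tag in brief.get("tags", [])],
    }

brief = {
    "product": "AeroMug",
    "tone": "playful",
    "beats": ["in a morning routine", "on a hiking trail"],
    "cta": "Grab yours today",
    "tags": ["coffee", "outdoors"],
}
print(json.dumps(storyboard_from_brief(brief), indent=2))
```

Each scene entry then carries everything downstream stages need: a prompt for image generation and a line for voice-over.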
Why it matters:
Marketing teams often struggle with long creative cycles and inconsistent brand voice. AutoMACE automates the brief-to-video workflow, enabling faster iteration while keeping output on-brand.
Tech Stack:
- Google Gemini 2.5 Flash Image (Nano Banana)
- Google Veo
- ElevenLabs (voice synthesis)
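A rough sketch of how these pieces fit together in a pipeline. The model calls are stubbed out here (the function names are placeholders, not the real Gemini/Veo/ElevenLabs SDK APIs) so the orchestration logic is clear on its own:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    """One storyboard scene: an image prompt plus a voice-over line."""
    prompt: str
    voiceover: str

# Stubs standing in for the real API calls (Gemini 2.5 Flash Image for
# visuals, ElevenLabs for TTS, Veo for video). Placeholder names only.
def render_image(prompt: str) -> bytes:
    return f"img:{prompt}".encode()

def synthesize_voice(text: str) -> bytes:
    return f"wav:{text}".encode()

def assemble_video(frames: list[bytes], audio: list[bytes]) -> bytes:
    # In the real pipeline this would be Veo generation plus
    # branded overlays; here it just concatenates the stub assets.
    return b"mp4:" + b"|".join(frames + audio)

def run_pipeline(scenes: list[Scene]) -> bytes:
    frames = [render_image(s.prompt) for s in scenes]
    audio = [synthesize_voice(s.voiceover) for s in scenes]
    return assemble_video(frames, audio)

video = run_pipeline([
    Scene("product hero shot", "Meet your new favorite mug."),
    Scene("outdoor lifestyle shot", "Built for every adventure."),
])
```

Each stage consumes the storyboard independently, so a failed image or audio generation can be retried per scene without redoing the whole run.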
GitHub: kashyap0729/automace-ai-marketing-content-engine
I’d love your feedback:
What feature would be most valuable to add next?
- Video transitions
- Beat-synced captions
- Chat-based image generation
Thanks in advance for your thoughts!