I'm Glory, founder of Memories. Former researcher at Meta, with a PhD in computer science from Cambridge University.
🚀 Today we're launching Memories – the world's first Large Visual Memory Model (LVMM). Our mission? To give AI visual memory, the way humans have it.
Why visual memory?
Today's AI is mostly chat-based. But humans aren't just chat-based – we think, remember, and interact with the world. Visual memory is fundamental to how we understand people, places, and context. That's what we're bringing to AI.
With Memories, AI can watch, understand, and remember your video content – just like a person would. Once your videos are uploaded and indexed, you can talk to them naturally, ask questions, and retrieve any moment – no need to rewatch anything.
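To make that flow concrete, here is a minimal sketch of what "upload, then chat with your videos" could look like against a hypothetical REST API. The base URL, endpoint paths, payload fields, and the MEMORIES_API_KEY variable are illustrative assumptions, not the actual Memories API.

```python
# Hypothetical sketch of the upload-then-chat flow.
# Endpoint paths and payload fields are assumptions for illustration only.
import os
import requests

API_BASE = "https://api.example.com/v1"   # placeholder base URL (assumption)
API_KEY = os.environ["MEMORIES_API_KEY"]  # hypothetical credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1) Upload a video so it can be indexed into visual memory.
with open("team_offsite.mp4", "rb") as f:
    upload = requests.post(f"{API_BASE}/videos", headers=HEADERS, files={"file": f})
video_id = upload.json()["video_id"]

# 2) Chat with the indexed video: ask for a specific moment in plain language.
answer = requests.post(
    f"{API_BASE}/chat",
    headers=HEADERS,
    json={
        "video_ids": [video_id],
        "question": "When does the product demo start, and what is shown?",
    },
)
print(answer.json()["answer"])
```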
🎥 What you can do with Memories
Chat with your video library: ask questions and get instant answers from your video content.
Create videos via chat: our video creator agent (beta) lets you make video content just by chatting.
Think Lovable, but for video.
No more dragging and trimming
Just a prompt, and you get your edited video back.
Discover trends and influencers: our video marketer agent (beta) has indexed more than 1 million TikTok videos, so you can:
Discover viral trends,
Identify relevant creators – in seconds.
No more scrolling through endless feeds. Just chat and get what you need.
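For a sense of what a trend query could look like in practice, here is a rough sketch of a single chat-style request to the marketer agent. The /marketer/query endpoint and the response fields are assumptions for illustration, not a documented interface.

```python
# Hypothetical sketch: ask the video marketer agent for trends and creators.
# Endpoint and response fields are illustrative assumptions.
import os
import requests

API_BASE = "https://api.example.com/v1"   # placeholder base URL (assumption)
HEADERS = {"Authorization": f"Bearer {os.environ['MEMORIES_API_KEY']}"}

resp = requests.post(
    f"{API_BASE}/marketer/query",
    headers=HEADERS,
    json={"query": "Which skincare trends went viral this week, and which creators drove them?"},
)
for item in resp.json().get("results", []):
    print(item["trend"], "-", item["top_creator"])
```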
A headache for TikTok, perhaps?
Thank you all for your support!