Shared AI memory from ChatGPT
Download your data from search, social, and AI apps, then run context-use locally to turn that raw data into rich, searchable memories. Use these memories as personal context for all your agents.
Setup Guide
Install
pip install context-use
# or
uv tool install context-use

Quick start
context-use pipeline --quick
Quickly extract memories (last 30 days) from your data export. Uses the real-time API — fast for small slices but susceptible to rate limits on large exports.
You must have an export from a supported provider.
Full pipeline
context-use pipeline
Uses the batch API — significantly cheaper and more rate-limit-friendly. Typical runtime: 2–10 minutes. Memories are stored in SQLite and persist across sessions.
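Because memories persist in SQLite, you can also inspect them with any SQLite client. The table and column names below are invented for illustration (context-use's actual schema may differ); the sketch just shows the kind of query a memory store supports:

```python
import sqlite3

# Illustrative only: `memories` and its columns are assumed names,
# not context-use's real schema. An in-memory DB stands in for the store.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memories (id INTEGER PRIMARY KEY, source TEXT, "
    "created_at TEXT, content TEXT)"
)
conn.executemany(
    "INSERT INTO memories (source, created_at, content) VALUES (?, ?, ?)",
    [
        ("chatgpt", "2024-07-01", "Planned a hiking trip in the Dolomites"),
        ("claude", "2024-08-12", "Compared trail running shoes"),
    ],
)

# Simple substring match; `context-use memories search` is likely smarter
# (full-text or embedding search), but the result shape is the same idea.
rows = conn.execute(
    "SELECT source, created_at, content FROM memories WHERE content LIKE ?",
    ("%hiking%",),
).fetchall()
for source, created_at, content in rows:
    print(f"[{source} {created_at}] {content}")
```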
Explore your memories
context-use memories list
context-use memories search "hiking trips in 2024"
context-use memories export

Personal agent
context-use agent synthesise
context-use agent profile
context-use agent ask "What topics do I keep coming back to?"
A multi-turn agent that operates over your full memory store.
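Conceptually, a multi-turn memory agent retrieves the memories most relevant to each question and feeds them to the model alongside the conversation. This is a generic sketch of that pattern with a stubbed model call, not context-use's actual implementation:

```python
def retrieve(memories, question, k=2):
    """Naive relevance ranking by word overlap; real agents use embeddings."""
    words = set(question.lower().split())
    ranked = sorted(memories, key=lambda m: -len(words & set(m.lower().split())))
    return ranked[:k]

def answer(memories, history, question, model=None):
    """One agent turn: retrieve context, build a prompt, call the model."""
    context = retrieve(memories, question)
    prompt = "\n".join(
        ["Relevant memories:", *context, "Conversation:", *history, f"User: {question}"]
    )
    # Stubbed model call; a real agent would send `prompt` to an LLM here.
    reply = model(prompt) if model else f"(model sees {len(context)} memories)"
    history.extend([f"User: {question}", f"Agent: {reply}"])
    return reply

memories = [
    "I keep planning hiking trips",
    "I prefer quiet coffee shops for work",
    "Training for a trail half-marathon",
]
history = []
reply = answer(memories, history, "What outdoor hobbies do I have?")
print(reply)
```

Because `history` accumulates across calls, repeated questions stay grounded in both prior turns and retrieved memories.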
Run context-use config --help for more options. Configuration is saved at ~/.config/context-use/config.toml.
Data sources

ChatGPT
Claude
TikTok (coming soon)
Amazon (coming soon)
Booking (coming soon)

Contribute new integrations to context-use.
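An integration's job is essentially to normalize one provider's export format into uniform records for the pipeline. Everything below is hypothetical (the field names, the provider, and the record shape are assumptions, not the actual context-use plugin interface); it only shows the kind of transformation an integration performs:

```python
import json
from datetime import datetime, timezone

def parse_export(raw: str) -> list[dict]:
    """Turn a hypothetical provider export into normalized memory records."""
    records = []
    for item in json.loads(raw):
        records.append({
            "source": "example-provider",  # assumed field names throughout
            "timestamp": datetime.fromtimestamp(
                item["ts"], tz=timezone.utc
            ).isoformat(),
            "content": item["text"].strip(),
        })
    return records

# A toy export: one record with a Unix timestamp and untrimmed text.
raw = json.dumps([{"ts": 1718000000, "text": "  Booked a trip to Lisbon  "}])
records = parse_export(raw)
print(records[0]["content"])
```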
Sign up to context-use cloud
Join Fabric's beta for managed context-use.
Create your shared AI memory in minutes
Download your data, run context-use locally, and give your agents real context about you.