Shared AI memory from ChatGPT

Download your data from search, social, and AI apps, then run context-use locally to turn raw exports into rich, searchable memories. Use these memories as personal context for all your agents.

Local
Open Source
Portable

Setup Guide

1. Install

terminal
pip install context-use
# or
uv tool install context-use
2. Quick start

terminal
context-use pipeline --quick

Quickly extract memories (last 30 days) from your data export. Uses the real-time API — fast for small slices but susceptible to rate limits on large exports.

You must have an export from a supported provider.
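Rate limits are the main failure mode on the quick path with a large export. The sketch below is a generic exponential-backoff retry loop, shown only to illustrate why the batch path is gentler; it is not context-use's actual retry logic, and the error type and delays are assumptions.

```python
import time

def with_backoff(call, max_retries=5, base=1.0):
    """Retry `call` with exponential backoff.

    Illustrative only -- not context-use's real rate-limit handling.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:  # stand-in for a provider rate-limit error
            # Wait 1s, 2s, 4s, ... (capped) before retrying.
            time.sleep(min(base * 2 ** attempt, 30.0))
    raise RuntimeError("rate limited: retries exhausted")
```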

3. Full pipeline

terminal
context-use pipeline

Uses the batch API — significantly cheaper and more rate-limit-friendly. Typical runtime: 2–10 minutes. Memories are stored in SQLite and persist across sessions.
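Because the store is plain SQLite, memories are easy to inspect with standard SQL and any client. The sketch below uses a toy in-memory table to show the idea; the real database path, table name, and columns are assumptions, not context-use's documented schema.

```python
import sqlite3

# Toy stand-in for the pipeline's SQLite store. The `memories` table
# and its columns are assumed for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT, created_at TEXT)"
)
conn.executemany(
    "INSERT INTO memories (text, created_at) VALUES (?, ?)",
    [
        ("Planned a hiking trip in the Dolomites", "2024-06-01"),
        ("Asked about sourdough starter hydration", "2024-07-14"),
    ],
)
conn.commit()

# Plain SQL works; in the real store the file lives on disk,
# which is why memories persist across sessions.
rows = conn.execute("SELECT text FROM memories ORDER BY created_at").fetchall()
```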

4. Explore your memories

terminal
context-use memories list
context-use memories search "hiking trips in 2024"
context-use memories export
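How `memories search` ranks results isn't documented here; semantic (embedding) search is one likely approach. As a rough illustration of keyword search over a memory store, SQLite's built-in FTS5 full-text index does the job in a few lines. This is a sketch, not the tool's implementation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 ships with most SQLite builds and provides ranked keyword search.
conn.execute("CREATE VIRTUAL TABLE mem USING fts5(text)")
conn.executemany(
    "INSERT INTO mem(text) VALUES (?)",
    [
        ("Hiking the Tour du Mont Blanc in July 2024",),
        ("Comparing espresso grinder burr shapes",),
        ("Day hikes near Boulder with good trail access",),
    ],
)
# FTS5 query syntax: bare terms, uppercase OR; `rank` orders by relevance.
hits = conn.execute(
    "SELECT text FROM mem WHERE mem MATCH ? ORDER BY rank",
    ("hiking OR hikes",),
).fetchall()
```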
5. Personal agent

terminal
context-use agent synthesise
context-use agent profile
context-use agent ask "What topics do I keep coming back to?"

A multi-turn agent that operates over your full memory store.
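Conceptually, a memory-backed agent retrieves the memories most relevant to your question and injects them into the model's context before answering. The sketch below shows that shape with deliberately naive word-overlap retrieval; the real agent's retrieval and prompting are not documented here, so every detail below is an assumption.

```python
def build_prompt(question: str, memories: list[str], k: int = 3) -> str:
    """Assemble an LLM prompt from the top-k most relevant memories.

    Word-overlap scoring is a naive stand-in for whatever retrieval
    the real agent uses (likely embedding similarity).
    """
    q_words = set(question.lower().split())

    def score(memory: str) -> int:
        # Count question words that also appear in the memory.
        return len(q_words & set(memory.lower().split()))

    top = sorted(memories, key=score, reverse=True)[:k]
    context = "\n".join(f"- {m}" for m in top)
    return f"Personal context:\n{context}\n\nQuestion: {question}"
```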

Run context-use config --help for more options. Configuration is saved at ~/.config/context-use/config.toml.

Data sources

ChatGPT
Instagram
Claude
Google
Facebook

Coming soon:
LinkedIn
TikTok
Pinterest
Amazon
Booking

Contribute new integrations to context-use.

Fabric (coming soon)

Sign up for context-use cloud
Join Fabric's beta for managed context-use.

Web consent flows
Regular data sync
Dedicated support

Create your shared AI memory in minutes

Download your data, run context-use locally, and give your agents real context about you.