How to Export and Analyze 13F Holdings Data Outside the Platform

Sarah Mitchell

A practical walkthrough for using 13F Insight exports in spreadsheets and Python, with tier-specific guidance for CSV and JSON workflows.

Why export data if the platform already has charts and tables?

13F Insight is built for fast signal discovery, but offline analysis is still useful when you want to build your own scoring model, keep an audit trail, or combine holdings data with other datasets. Export gives you that flexibility without losing the context you found on-platform.

For example, you can start from a filer page such as Berkshire Hathaway’s profile, then export its holdings into your own workbook to rank position changes with custom thresholds.

Export access by plan

  • Free: no export access
  • Standard: CSV export
  • Pro: CSV + JSON export

If you are building anything repeatable (quarterly trackers, model portfolios, internal dashboards), Pro’s JSON export is usually worth it because it maps cleanly into Python and API-style pipelines.

Where export is available in 13F Insight

  • Holdings tables on filer detail pages
  • Stock holder lists on stock pages like AAPL and MSFT
  • Filer comparison views when you compare managers side by side

A simple rule: export from the exact view you are using for decisions. That keeps your offline file aligned with what you actually saw in the UI.

Step-by-step walkthrough

  1. Open the view you want to analyze. Start on a filer holdings table, a stock’s holder list, or a filer comparison view.
  2. Set your filters first. Quarter, sorting, and scope should be finalized before export so your file matches your on-screen thesis.
  3. Click Export and choose a format. Standard users select CSV; Pro users can choose CSV or JSON depending on their toolchain.
  4. Name the file with context. Include entity and quarter, for example berkshire-2026Q1-holdings.csv or aapl-holders-2026Q1.json.
  5. Validate totals after download. Before analysis, verify row count and top positions against the page to avoid downstream mistakes.
  6. Analyze with your preferred tool. Spreadsheet for quick slicing; Python for repeatable scoring and quarter-over-quarter diffs.
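Step 5's validation can be scripted so it runs the same way every quarter. A minimal sketch in pandas, assuming illustrative column names (`issuer`, `value`); swap the stand-in frame for `pd.read_csv("berkshire-2026Q1-holdings.csv")` and match the names to your actual export:

```python
import pandas as pd

# Stand-in for a downloaded export; in practice load your CSV with pd.read_csv.
# Column names here are hypothetical -- match them to your actual file.
df = pd.DataFrame({
    "issuer": ["AAPL", "BAC", "KO", "AXP"],
    "value": [150_000, 30_000, 25_000, 22_000],  # position value, $ thousands
})

def validate_export(df, expected_rows, expected_top):
    """Basic post-download checks against what the UI showed on-screen."""
    assert len(df) == expected_rows, f"row count {len(df)} != {expected_rows}"
    top = df.sort_values("value", ascending=False).iloc[0]["issuer"]
    assert top == expected_top, f"top position {top} != {expected_top}"
    return True

print(validate_export(df, expected_rows=4, expected_top="AAPL"))
```

If either check fails, re-export before doing any analysis; the mismatch usually means filters changed between viewing and downloading.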

Spreadsheet workflow that works for most investors

Use CSV when you want speed and visibility:

  • Create a pivot table for exposure by sector, issuer, or filer
  • Add conditional formatting for large percentage changes
  • Track your own watchlist threshold, such as a weight change above 1.5 percentage points

A best practice borrowed from data workflows: keep one tab as the raw export and do all calculations in separate tabs. Never overwrite the raw file; that makes your process easier to audit and update next quarter.
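The same sector-exposure pivot translates directly to pandas if you later move off spreadsheets. A sketch with hypothetical column names (`sector`, `weight`):

```python
import pandas as pd

# Illustrative holdings rows; in practice load the raw-export tab or CSV.
df = pd.DataFrame({
    "sector": ["Tech", "Tech", "Staples"],
    "issuer": ["AAPL", "MSFT", "KO"],
    "weight": [45.0, 10.0, 8.0],  # % of portfolio
})

# Equivalent of a spreadsheet pivot table: total weight by sector.
pivot = df.pivot_table(index="sector", values="weight", aggfunc="sum")
print(pivot)
```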

Python workflow for Pro users (JSON + CSV)

JSON is useful when you want structured fields without spreadsheet cleanup. A practical setup is:

  • Load JSON into pandas
  • Normalize column names once
  • Run a reusable script to compare current quarter vs prior quarter
  • Export your final signal table back to CSV for review
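The four steps above can be sketched in a few lines of pandas. The records below stand in for two quarterly JSON exports (in practice, something like `pd.read_json("berkshire-2026Q1-holdings.json")`); the `issuer` and `weight` fields are assumptions to adapt to your actual schema:

```python
import pandas as pd

# Illustrative records standing in for two quarterly JSON exports.
prior = pd.DataFrame([
    {"issuer": "AAPL", "weight": 40.0},
    {"issuer": "KO",   "weight": 10.0},
])
current = pd.DataFrame([
    {"issuer": "AAPL", "weight": 45.0},
    {"issuer": "KO",   "weight": 8.0},
])

def quarter_diff(current, prior):
    """Join two quarters on issuer and rank by weight change."""
    merged = current.merge(prior, on="issuer",
                           suffixes=("_cur", "_prior"), how="outer").fillna(0)
    merged["weight_chg"] = merged["weight_cur"] - merged["weight_prior"]
    return merged.sort_values("weight_chg", ascending=False)

diff = quarter_diff(current, prior)
diff.to_csv("signal-table.csv", index=False)  # final signal table back to CSV for review
print(diff[["issuer", "weight_chg"]])
```

The outer merge matters: it keeps new positions (no prior-quarter row) and exits (no current-quarter row) instead of silently dropping them.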

Data hygiene matters more than model complexity. Add basic checks at load time: null counts, duplicate rows, and outlier flags on value and weight changes.
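Those load-time checks can live in one small function. A sketch, again with hypothetical column names, that counts nulls and duplicate rows and flags implausible weights:

```python
import pandas as pd

# Stand-in for a freshly loaded export with deliberate problems:
# a missing issuer, a duplicated row, and a weight above 100%.
df = pd.DataFrame({
    "issuer": ["AAPL", "KO", "KO", None],
    "weight": [45.0, 8.0, 8.0, 120.0],
})

def hygiene_report(df, weight_col="weight", max_weight=100.0):
    """Flag nulls, duplicate rows, and implausible weights before analysis."""
    return {
        "null_counts": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "weight_outliers": int((df[weight_col] > max_weight).sum()),
    }

print(hygiene_report(df))
```

Run the report immediately after loading and fail fast if any count is nonzero; it is much cheaper than discovering a duplicated row inside a finished signal table.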

Real use cases you can run this week

  • Conviction monitor: export two quarters of one filer and rank the biggest weight increases
  • Crowding check: export a stock holder list and compare holder overlap with your watchlist
  • Manager style map: export filer comparison data to separate concentrated managers from diversified allocators
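The crowding check, for instance, reduces to a set intersection once both lists are exported. A minimal sketch with made-up holder names:

```python
# Hypothetical holder names from an exported stock holder list
# and from your own watchlist of managers.
aapl_holders = {"Berkshire", "Vanguard", "BlackRock"}
watchlist_managers = {"Berkshire", "Pershing Square"}

# Managers on your watchlist who also hold the stock.
overlap = aapl_holders & watchlist_managers
print(sorted(overlap))
```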

Common mistakes to avoid

  • Exporting before filters are finalized
  • Mixing files from different quarters without clear naming
  • Treating every large position as active conviction without cross-checking context
  • Skipping validation between UI totals and downloaded rows

If you want a stronger interpretation framework before exporting, read more explainers in the Learn hub.

Bottom line

On-platform views help you find signals quickly. Export helps you turn those signals into a repeatable process. Start with CSV if you want fast analysis, move to JSON when you need automation, and keep a clean naming and validation routine every quarter.
