SolarKnock Reporting API
Pull every knock, lead, appointment, and stat from your organization into your own data warehouse, CRM, or BI tool. One header, one endpoint per resource, cursor-paginated, incremental-sync friendly.
Overview
The Reporting API exposes the source data behind every screen in SolarKnock — the same rows that power your leaderboard, your stats sheet, your manager dashboard, and your lead pipeline. You get the records, not the aggregates, so your warehouse can roll them up however you like.
Base URL: https://app.solarknock.com/api/v1/reporting
Every endpoint is read-only. You cannot create, modify, or delete data through this surface — for that, use the SolarKnock app or the in-app endpoints. This API exists to get data out.
Authentication
Your manager invite code is your API key. Find it in the app under Profile → Manage Members → Manager Code. Send it as a Bearer token on every request:
Authorization: Bearer sk_BHMGR07
The sk_ prefix is optional — bare codes work too. Each key is scoped to exactly one organization; there is no ?orgId= parameter on any endpoint, ever.
Anyone with your manager code can read every knock, lead, and homeowner contact in your organization. Never embed it in client-side code. Rotate it from the app at any time — old keys stop working immediately.
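As a quick sketch of the header format, here is how a request might be built with only the standard library. The `check_key` helper is a hypothetical convenience (running it needs a live key); the `/org` endpoint it calls is documented below.

```python
import json
import urllib.request

BASE = "https://app.solarknock.com/api/v1/reporting"

def auth_headers(key: str) -> dict:
    # The sk_ prefix is optional: bare manager codes pass through unchanged.
    return {"Authorization": f"Bearer {key}"}

def check_key(key: str) -> dict:
    """Sanity check: /org should resolve to the organization you expect."""
    req = urllib.request.Request(f"{BASE}/org", headers=auth_headers(key))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```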
Making Requests
Every list endpoint accepts the same five standard parameters. Endpoint-specific filters (like status on leads) are documented per endpoint.
| Parameter | Type | Default | Description |
|---|---|---|---|
| since | ISO 8601 | — | Return rows where updated_at > since. Use this for incremental sync. |
| until | ISO 8601 | — | Upper bound on updated_at. Use to make a sync window deterministic. |
| cursor | string | — | Opaque pagination cursor returned by the previous page. |
| limit | integer | 500 | Page size. Maximum 1000. |
| include_deleted | boolean | false | Include soft-deleted rows. |
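One way to assemble these parameters into a request URL, leaving unset parameters out so the server-side defaults apply (a sketch; the `list_url` helper is ours, not part of any SDK):

```python
from urllib.parse import urlencode

BASE = "https://app.solarknock.com/api/v1/reporting"

def list_url(endpoint: str, **params) -> str:
    """Compose a list-endpoint URL from the standard parameters."""
    # Drop unset parameters so defaults (limit=500, include_deleted=false) apply.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE}/{endpoint}" + (f"?{query}" if query else "")

url = list_url("knocks", since="2026-04-30T00:00:00Z", limit=1000)
```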
Response envelope
Every list response shares the same shape:
{
"data": [ /* rows for this page */ ],
"pagination": {
"next_cursor": "eyJpZCI6IjAxMjM0NTY3...=",
"has_more": true
},
"meta": {
"org_id": "f7a...",
"fetched_at": "2026-05-01T17:32:14Z",
"row_count": 500
}
}
When has_more is false, next_cursor is null. Your sync loop is simply: while (cursor) { fetch(cursor) }.
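With the HTTP call stubbed out as a `fetch_page` callable, that loop can be sketched as:

```python
def drain(fetch_page):
    """Follow next_cursor until has_more is false, collecting every row."""
    rows, cursor = [], None
    while True:
        body = fetch_page(cursor)  # first call passes no cursor
        rows.extend(body["data"])
        cursor = body["pagination"]["next_cursor"]
        if not body["pagination"]["has_more"]:
            return rows  # next_cursor is null here
```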
Pagination & Incremental Sync
Cursors are stable under concurrent writes — a row inserted mid-sync won't cause the next page to skip a record. The cursor encodes (updated_at, id) and the server returns rows ordered the same way, so ties on updated_at are broken deterministically by id.
For nightly syncs, save the timestamp at which your job started (not finished) and use it as the next run's since. This guarantees no rows fall in the gap between your read and a row's next update.
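That bookkeeping fits in a few lines. The `sync_window` helper below is our own illustration: it pairs the previous run's start (as since) with this run's start (to save for next time, or to pass as until for a fully deterministic window).

```python
import datetime as dt

def sync_window(previous_start: str) -> tuple[str, str]:
    """since = when the LAST run started; the second value = when THIS run
    starts, to be saved as the next run's since (or passed as until)."""
    started = dt.datetime.now(dt.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return previous_start, started
```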
Errors
Errors come back in a uniform envelope:
{
"error": {
"code": "INVALID_KEY",
"message": "API key not recognized or expired."
}
}
| HTTP | Code | When it happens |
|---|---|---|
| 401 | UNAUTHENTICATED | Missing Authorization header. |
| 401 | INVALID_KEY | Header present but no organization matches. |
| 402 | SUBSCRIPTION_EXPIRED | Your subscription is past due. Update payment to restore access. |
| 400 | INVALID_CURSOR | Cursor doesn't decode or is from a different endpoint. |
| 400 | INVALID_PARAM | Bad date format, unknown enum value, etc. |
| 429 | RATE_LIMITED | You exceeded your rate limit — see headers. |
| 500 | INTERNAL_ERROR | Something broke on our end. Retry with backoff. |
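Only 429 and 500 are worth retrying; the other codes indicate a problem with the key, subscription, or request that a retry won't fix. A minimal retry policy, with exponential backoff and jitter (our own sketch, not a client library):

```python
import random

RETRYABLE = {429, 500}  # RATE_LIMITED and INTERNAL_ERROR

def should_retry(status: int, attempt: int, max_attempts: int = 5) -> bool:
    # 4xx errors other than 429 are bugs in the request; don't retry them.
    return status in RETRYABLE and attempt < max_attempts

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with jitter: sleep this long before retry N."""
    return min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.0)
```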
Rate Limits
The reporting API is rate-limited per organization, not per IP — your integration can run from anywhere.
- Default: 60 requests per minute
- Burst: Up to 120 in any 60-second window
- Every response includes X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers
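A well-behaved sync job can pause when the budget runs out. The sketch below assumes X-RateLimit-Reset is a Unix timestamp, which is a common convention but not confirmed by these docs; check a live response before relying on it.

```python
import time

def pause_before_next(headers: dict) -> float:
    """Seconds to sleep before the next request, given response headers.

    Assumes X-RateLimit-Reset is a Unix timestamp (unverified assumption).
    """
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0.0  # budget left, keep going
    reset = float(headers.get("X-RateLimit-Reset", time.time()))
    return max(0.0, reset - time.time())
```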
At limit=1000, the default of 60 req/min lets you pull 60,000 rows/minute — more than enough for any nightly sync we've seen.
Endpoints
GET /org
Returns metadata about your organization — name, tier, subscription status, member count, and your invite codes. Useful as a sanity check that your key resolves to the org you expect.
curl -H "Authorization: Bearer sk_BHMGR07" \
  https://app.solarknock.com/api/v1/reporting/org
GET /users
Every member of your organization with their role, email, profile info, last login time, and closer status. One row per user.
Returned fields: id, name, email, role, is_closer, available_to_close, profile_picture_url, badge_photo_url, created_at, last_login_at, status, joined_at, default_org_id.
GET /knocks
Every knock pin in your organization — homeowner name, address, lat/lng, status, notes, disposition, follow-up date, the rep who knocked, and links to any lead or appointment that resulted.
Endpoint-specific filters:
- user_id=<uuid> — only this rep's knocks
- status=<value> — e.g. ?status=appointment
- knock_date_from=YYYY-MM-DD&knock_date_to=YYYY-MM-DD — calendar-day range against the knock date itself
- county_fips=<5-digit> — pins inside one US county
curl -H "Authorization: Bearer sk_BHMGR07" \
  "https://app.solarknock.com/api/v1/reporting/knocks?since=2026-04-30T00:00:00Z&limit=1000"
GET /leads
Every lead in your organization — name, contact info, monthly bill, roof type, interest level, status, score, notes, system-size estimate, savings projection, and attachments.
Endpoint-specific filters:
- status=<value> — new, contacted, appointment, won, lost, etc.
- user_id=<uuid> — only this rep's leads
- created_from=YYYY-MM-DD&created_to=YYYY-MM-DD — calendar-day range against creation
The attachments field is a JSON array of {filename, url, uploaded_at}. URLs are signed and valid for 1 hour from meta.fetched_at.
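Because the signed URLs expire, a sync job should either download attachments promptly or re-fetch the lead row for fresh URLs. A small helper for computing the cutoff (our own illustration):

```python
import datetime as dt

def attachment_deadline(fetched_at: str) -> dt.datetime:
    """Signed attachment URLs expire 1 hour after meta.fetched_at;
    download before this moment or re-fetch the lead for fresh URLs."""
    fetched = dt.datetime.fromisoformat(fetched_at.replace("Z", "+00:00"))
    return fetched + dt.timedelta(hours=1)
```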
GET /appointments
Every appointment your team has set — date, time, type, status, the closer assigned, the setter who set it, and the lead/pin it relates to. Both Google Calendar event IDs are returned for reps who have calendar sync enabled.
Endpoint-specific filters:
- closer_id=<uuid> / setter_id=<uuid>
- status=<value> — pending, completed, cancelled, etc.
- date_from=YYYY-MM-DD&date_to=YYYY-MM-DD — appointment's own date
GET /daily-stats
The raw rows that power your leaderboard. One row per (user_id, date) with knock count, leads generated, appointments set, deals closed, hours worked, and the timestamps of each individual knock that day.
Endpoint-specific filters:
- user_id=<uuid>
- date_from=YYYY-MM-DD&date_to=YYYY-MM-DD
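Since you get the raw (user_id, date) rows rather than aggregates, a leaderboard is just a group-by on your side. A sketch, assuming a knock_count field per row (the exact field name is an assumption based on the description above):

```python
from collections import defaultdict

def leaderboard(daily_stats: list) -> list:
    """Roll up raw (user_id, date) rows into an all-time knock leaderboard."""
    totals = defaultdict(int)
    for row in daily_stats:
        totals[row["user_id"]] += row["knock_count"]  # field name assumed
    # Highest knock total first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```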
One row per (user, day) where the user opened the SolarKnock app. Useful for "active reps" calculations and attendance analytics.
Every US county your organization has knocked in, with renter-data availability status and a count of knocks per county.
Recipe: Full Warehouse Pull
Run once when you set up the integration. Pulls every row from every endpoint into your warehouse.
const API_KEY = process.env.SOLARKNOCK_KEY;
const BASE = "https://app.solarknock.com/api/v1/reporting";

async function pullAll(endpoint) {
  let cursor = null, all = [];
  do {
    const url = `${BASE}/${endpoint}?limit=1000${cursor ? `&cursor=${cursor}` : ""}`;
    const resp = await fetch(url, { headers: { Authorization: `Bearer ${API_KEY}` } });
    const body = await resp.json();
    all.push(...body.data);
    cursor = body.pagination.next_cursor;
  } while (cursor);
  return all;
}

for (const ep of ["users", "knocks", "leads", "appointments", "daily-stats"]) {
  const rows = await pullAll(ep);
  // upsert into your warehouse / CSV / etc.
  console.log(`${ep}: ${rows.length} rows`);
}
Recipe: Nightly Incremental Sync
Save the timestamp at which the job started, use it as the next run's since. The diff is small, the cursor pages through it, and you never miss a row even if updates land mid-sync.
import os, requests, datetime as dt
from pathlib import Path

API_KEY = os.environ["SOLARKNOCK_KEY"]
STATE = Path("last_run.txt")
BASE = "https://app.solarknock.com/api/v1/reporting"

since = STATE.read_text().strip() if STATE.exists() else "1970-01-01T00:00:00Z"
started = dt.datetime.utcnow().isoformat() + "Z"

def pull(endpoint):
    cursor = None
    while True:
        params = {"since": since, "limit": 1000}
        if cursor:
            params["cursor"] = cursor
        r = requests.get(f"{BASE}/{endpoint}", params=params,
                         headers={"Authorization": f"Bearer {API_KEY}"})
        body = r.json()
        yield from body["data"]
        cursor = body["pagination"]["next_cursor"]
        if not cursor:
            break

for endpoint in ["knocks", "leads", "appointments", "daily-stats"]:
    for row in pull(endpoint):
        # upsert(endpoint, row)
        pass

STATE.write_text(started)  # save for next run
Recipe: Connect to Your BI Tool
Most BI tools (Looker, Metabase, Tableau, Hex, Mode) can hit a paginated REST endpoint directly, but they're happiest with a flat table they own. The pattern that scales:
- Stand up a small daily job (Airflow, GitHub Actions, cron) that runs the Python recipe above.
- Land the rows in Postgres / Snowflake / BigQuery / DuckDB / a parquet file in S3.
- Point your BI tool at the warehouse, not at this API directly.
This way your dashboards stay fast (no API round-trips per query), and you can join SolarKnock data with your own CRM, payroll, and financial data.
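For the "land the rows" step, even stdlib SQLite is enough at small scale. The sketch below upserts each row keyed on id, storing the full payload as JSON so schema changes upstream never break the job (table layout and helper name are our own choices, not part of any SDK):

```python
import json
import sqlite3

def land_rows(db_path: str, endpoint: str, rows: list) -> None:
    """Upsert one endpoint's rows into a local warehouse table, keyed on id."""
    conn = sqlite3.connect(db_path)
    table = endpoint.replace("-", "_")  # e.g. daily-stats -> daily_stats
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {table} "
        "(id TEXT PRIMARY KEY, payload TEXT, updated_at TEXT)"
    )
    conn.executemany(
        f"INSERT INTO {table} (id, payload, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET "
        "payload = excluded.payload, updated_at = excluded.updated_at",
        [(r["id"], json.dumps(r), r.get("updated_at")) for r in rows],
    )
    conn.commit()
    conn.close()
```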
Support
Questions? Email support@solarknock.com. We'll help you scope out your warehouse setup and answer schema questions.