v1 · Read-only · No SDK required

SolarKnock Reporting API

Pull every knock, lead, appointment, and stat from your organization into your own data warehouse, CRM, or BI tool. One header, one endpoint per resource, cursor-paginated, incremental-sync friendly.

Overview

The Reporting API exposes the source data behind every screen in SolarKnock — the same rows that power your leaderboard, your stats sheet, your manager dashboard, and your lead pipeline. You get the records, not the aggregates, so your warehouse can roll them up however you like.

Base URL: https://app.solarknock.com/api/v1/reporting

Every endpoint is read-only. You cannot create, modify, or delete data through this surface — for that, use the SolarKnock app or the in-app endpoints. This API exists to get data out.

Authentication

Your manager invite code is your API key. Find it in the app under Profile → Manage Members → Manager Code. Send it as a Bearer token on every request:

Authorization Header
Authorization: Bearer sk_BHMGR07

The sk_ prefix is optional — bare codes work too. Each key is scoped to exactly one organization; there is no ?orgId= parameter on any endpoint, ever.

⚠️ Treat your key like a password

Anyone with your manager code can read every knock, lead, and homeowner contact in your organization. Never embed it in client-side code. Rotate it from the app at any time — old keys stop working immediately.
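
Before wiring up a full sync, you can verify that a key resolves by hitting the /org endpoint. A minimal sketch in Python (using the requests library; the status codes are the ones documented under Errors below):

Python
import os, requests

API_KEY = os.environ["SOLARKNOCK_KEY"]  # your manager code, e.g. sk_BHMGR07
BASE = "https://app.solarknock.com/api/v1/reporting"

r = requests.get(f"{BASE}/org",
                 headers={"Authorization": f"Bearer {API_KEY}"},
                 timeout=30)
if r.status_code == 401:
    raise SystemExit("Key missing or not recognized: " + r.json()["error"]["message"])
if r.status_code == 402:
    raise SystemExit("Subscription past due: " + r.json()["error"]["message"])
r.raise_for_status()
print("Key OK; org metadata:", r.json())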

Making Requests

Every list endpoint accepts the same five standard parameters. Endpoint-specific filters (like status on leads) are documented per endpoint.

Parameter        Type      Default  Description
since            ISO 8601  -        Return rows where updated_at > since. Use this for incremental sync.
until            ISO 8601  -        Upper bound on updated_at. Use it to make a sync window deterministic.
cursor           string    -        Opaque pagination cursor returned by the previous page.
limit            integer   500      Page size. Maximum 1000.
include_deleted  boolean   false    Include soft-deleted rows.
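
For example, a deterministic one-day pull of knocks combines since, until, and limit (a sketch using Python's requests library; the response envelope it reads is described next):

Python
import requests

r = requests.get(
    "https://app.solarknock.com/api/v1/reporting/knocks",
    headers={"Authorization": "Bearer sk_BHMGR07"},
    params={
        "since": "2026-04-30T00:00:00Z",  # rows with updated_at after this instant
        "until": "2026-05-01T00:00:00Z",  # upper bound on updated_at
        "limit": 1000,                    # maximum page size
    },
    timeout=30,
)
r.raise_for_status()
print(len(r.json()["data"]), "rows on this page")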

Response envelope

Every list response shares the same shape:

JSON Response
{
  "data": [ /* rows for this page */ ],
  "pagination": {
    "next_cursor": "eyJpZCI6IjAxMjM0NTY3...=",
    "has_more": true
  },
  "meta": {
    "org_id": "f7a...",
    "fetched_at": "2026-05-01T17:32:14Z",
    "row_count": 500
  }
}

When has_more is false, next_cursor is null. Your sync loop is simply: fetch a page, read next_cursor, and repeat while it is non-null (a do/while, as in the recipes below).

Pagination & Incremental Sync

Cursors are stable under concurrent writes — a row inserted mid-sync won't cause the next page to skip a record. The cursor encodes (updated_at, id) and the server returns rows ordered the same way, so ties on updated_at are broken deterministically by id.

For nightly syncs, save the timestamp at which your job started (not finished) and use it as the next run's since. This guarantees no rows fall in the gap between your read and a row's next update.

Errors

Errors come back in a uniform envelope:

JSON
{
  "error": {
    "code": "INVALID_KEY",
    "message": "API key not recognized or expired."
  }
}

HTTP  Code                  When it happens
401   UNAUTHENTICATED       Missing Authorization header.
401   INVALID_KEY           Header present but no organization matches.
402   SUBSCRIPTION_EXPIRED  Your subscription is past due. Update payment to restore access.
400   INVALID_CURSOR        Cursor doesn't decode or is from a different endpoint.
400   INVALID_PARAM         Bad date format, unknown enum value, etc.
429   RATE_LIMITED          You exceeded your rate limit — see headers.
500   INTERNAL_ERROR        Something broke on our end. Retry with backoff.
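
A sync job only needs to special-case a few of these. Here is a hedged sketch of a request wrapper that surfaces the error envelope and retries the two retryable codes (429 and 500) with exponential backoff:

Python
import time, requests

def get_with_retry(url, headers, params=None, max_tries=5):
    """GET with backoff on 429/500; raise with the API's error envelope otherwise."""
    for attempt in range(max_tries):
        r = requests.get(url, headers=headers, params=params, timeout=30)
        if r.status_code in (429, 500):  # RATE_LIMITED / INTERNAL_ERROR
            time.sleep(2 ** attempt)     # 1s, 2s, 4s, ...
            continue
        if not r.ok:
            err = r.json()["error"]      # uniform envelope: code + message
            raise RuntimeError(f"{err['code']}: {err['message']}")
        return r
    raise RuntimeError("gave up after repeated 429/500 responses")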

Rate Limits

The reporting API is rate-limited per organization, not per IP — your integration can run from anywhere.

At limit=1000, the default of 60 req/min lets you pull 60,000 rows/minute — more than enough for any nightly sync we've seen.
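
If a long backfill does brush the ceiling, pacing requests client-side is simpler than reacting to 429s. A small sketch, assuming the default 60 req/min limit:

Python
import time

def throttle(min_interval=1.0):
    """Call before each request to stay at or under 60 req/min."""
    last = getattr(throttle, "_last", 0.0)
    wait = min_interval - (time.monotonic() - last)
    if wait > 0:
        time.sleep(wait)
    throttle._last = time.monotonic()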

Endpoints

GET /api/v1/reporting/org

Returns metadata about your organization — name, tier, subscription status, member count, and your invite codes. Useful as a sanity check that your key resolves to the org you expect.

curl
curl -H "Authorization: Bearer sk_BHMGR07" \
  https://app.solarknock.com/api/v1/reporting/org

GET /api/v1/reporting/users

Every member of your organization with their role, email, profile info, last login time, and closer status. One row per user.

Returned fields: id, name, email, role, is_closer, available_to_close, profile_picture_url, badge_photo_url, created_at, last_login_at, status, joined_at, default_org_id.

GET /api/v1/reporting/knocks

Every knock pin in your organization — homeowner name, address, lat/lng, status, notes, disposition, follow-up date, the rep who knocked, and links to any lead or appointment that resulted.

Endpoint-specific filters:

curl — incremental pull
curl -H "Authorization: Bearer sk_BHMGR07" \
  "https://app.solarknock.com/api/v1/reporting/knocks?since=2026-04-30T00:00:00Z&limit=1000"

GET /api/v1/reporting/leads

Every lead in your organization — name, contact info, monthly bill, roof type, interest level, status, score, notes, system-size estimate, savings projection, and attachments.

Endpoint-specific filters: status.

The attachments field is a JSON array of {filename, url, uploaded_at}. URLs are signed and valid for 1 hour from meta.fetched_at.
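
Because the URLs expire an hour after meta.fetched_at, download attachments in the same pass that fetches the page rather than storing the URLs for later. A sketch, assuming the signed URLs need no extra auth header:

Python
import os, requests

def download_attachments(page, dest_dir="attachments"):
    """Fetch every attachment on a page of leads while its signed URLs are still fresh."""
    os.makedirs(dest_dir, exist_ok=True)
    for lead in page["data"]:
        for att in lead.get("attachments", []):
            blob = requests.get(att["url"], timeout=60)
            blob.raise_for_status()
            with open(os.path.join(dest_dir, att["filename"]), "wb") as f:
                f.write(blob.content)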

GET /api/v1/reporting/appointments

Every appointment your team has set — date, time, type, status, the closer assigned, the setter who set it, and the lead/pin it relates to. Both Google Calendar event IDs are returned for reps who have calendar sync enabled.

Endpoint-specific filters:

GET /api/v1/reporting/daily-stats

The raw rows that power your leaderboard. One row per (user_id, date) with knock count, leads generated, appointments set, deals closed, hours worked, and the timestamps of each individual knock that day.

Endpoint-specific filters:
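
Because stats rows carry a user_id rather than a name, building a leaderboard means joining against the users endpoint. A sketch, where knocks and user_id are assumed field names (check your actual payloads):

Python
from collections import defaultdict

def leaderboard(users, stats):
    """Total knocks per rep from raw (user_id, date) stat rows, highest first."""
    names = {u["id"]: u["name"] for u in users}  # id and name come from /users
    totals = defaultdict(int)
    for row in stats:
        totals[row["user_id"]] += row["knocks"]  # assumed field names
    return sorted(((names.get(uid, uid), n) for uid, n in totals.items()),
                  key=lambda t: -t[1])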

GET /api/v1/reporting/usage-days

One row per (user, day) where the user opened the SolarKnock app. Useful for "active reps" calculations and attendance analytics.
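
An active-reps-per-day count is then a one-pass group-by over these rows. A sketch, where user_id and date are assumed field names:

Python
from collections import defaultdict

def active_reps_per_day(usage_rows):
    """Count distinct users who opened the app on each day."""
    reps = defaultdict(set)
    for row in usage_rows:
        reps[row["date"]].add(row["user_id"])  # assumed field names
    return {day: len(users) for day, users in sorted(reps.items())}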

GET /api/v1/reporting/counties

Every US county your organization has knocked in, with renter-data availability status and a count of knocks per county.

Recipe: Full Warehouse Pull

Run once when you set up the integration. Pulls every row from every endpoint into your warehouse.

Node.js
// Node 18+ (global fetch); run as an ES module so top-level await works.
const API_KEY = process.env.SOLARKNOCK_KEY;
const BASE = "https://app.solarknock.com/api/v1/reporting";

async function pullAll(endpoint) {
  let cursor = null, all = [];
  do {
    const url = `${BASE}/${endpoint}?limit=1000${cursor ? `&cursor=${encodeURIComponent(cursor)}` : ""}`;
    const resp = await fetch(url, { headers: { Authorization: `Bearer ${API_KEY}` } });
    if (!resp.ok) throw new Error(`${endpoint}: HTTP ${resp.status}`);
    const body = await resp.json();
    all.push(...body.data);
    cursor = body.pagination.next_cursor; // null on the last page
  } while (cursor);
  return all;
}

for (const ep of ["users", "knocks", "leads", "appointments", "daily-stats"]) {
  const rows = await pullAll(ep);
  // upsert into your warehouse / CSV / etc.
  console.log(`${ep}: ${rows.length} rows`);
}

Recipe: Nightly Incremental Sync

Save the timestamp at which the job started and use it as the next run's since. The diff is small, the cursor pages through it, and you never miss a row even if updates land mid-sync.

Python
import os, requests, datetime as dt
from pathlib import Path

API_KEY = os.environ["SOLARKNOCK_KEY"]
STATE   = Path("last_run.txt")
BASE    = "https://app.solarknock.com/api/v1/reporting"

since = STATE.read_text().strip() if STATE.exists() else "1970-01-01T00:00:00Z"
# Capture the start time *before* pulling, so rows updated mid-sync are
# re-fetched on the next run instead of slipping through the gap.
started = dt.datetime.now(dt.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

def pull(endpoint):
    cursor = None
    while True:
        params = {"since": since, "limit": 1000}
        if cursor:
            params["cursor"] = cursor
        r = requests.get(f"{BASE}/{endpoint}", params=params,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         timeout=30)
        r.raise_for_status()
        body = r.json()
        yield from body["data"]
        cursor = body["pagination"]["next_cursor"]
        if not cursor:
            break

for endpoint in ["knocks", "leads", "appointments", "daily-stats"]:
    for row in pull(endpoint):
        # upsert(endpoint, row)
        pass

STATE.write_text(started)  # save for next run

Recipe: Connect to Your BI Tool

Most BI tools (Looker, Metabase, Tableau, Hex, Mode) can hit a paginated REST endpoint directly, but they're happiest with a flat table they own. The pattern that scales:

  1. Stand up a small daily job (Airflow, GitHub Actions, cron) that runs the Python recipe above.
  2. Land the rows in Postgres / Snowflake / BigQuery / DuckDB / a parquet file in S3.
  3. Point your BI tool at the warehouse, not at this API directly.

This way your dashboards stay fast (no API round-trips per query), and you can join SolarKnock data with your own CRM, payroll, and financial data.
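
As a concrete version of step 2, here is a minimal landing table using Python's built-in sqlite3 (any warehouse above works the same way; the sketch assumes every row has a stable id and stores the rest as JSON):

Python
import json, sqlite3

con = sqlite3.connect("solarknock.db")
con.execute("""CREATE TABLE IF NOT EXISTS raw_rows (
    endpoint TEXT NOT NULL,
    id       TEXT NOT NULL,
    payload  TEXT NOT NULL,
    PRIMARY KEY (endpoint, id)
)""")

def upsert(endpoint, row):
    """Idempotent write: re-running a sync overwrites rows instead of duplicating them."""
    con.execute(
        "INSERT INTO raw_rows (endpoint, id, payload) VALUES (?, ?, ?) "
        "ON CONFLICT(endpoint, id) DO UPDATE SET payload = excluded.payload",
        (endpoint, row["id"], json.dumps(row)),
    )
    con.commit()

This pairs directly with the # upsert(endpoint, row) placeholder in the nightly-sync recipe above.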

💬 Need help integrating?

Email support@solarknock.com. We'll help you scope out the warehouse setup and answer schema questions.