Compare commits: main...deployment (8 commits)

Commits: 5dfc710ae9, 5067913c21, b9fad16fa2, 0ff358552c, 720c7e0b52, fb7428064f, 7ff3f43c93, 82eed71d7e
.gitignore (vendored, +1)

```diff
@@ -23,3 +23,4 @@ data/
 # Env/config overrides
 shard.yaml
 .env
+testdata/*
```
Dockerfile (1 line changed)

```diff
@@ -31,7 +31,7 @@ FROM gcr.io/distroless/base-debian12:nonroot
 WORKDIR /app
 COPY --from=build /out/greencoast-shard /app/greencoast-shard
 COPY configs/shard.sample.yaml /app/shard.yaml
-COPY client /app/client
+COPY client/ /opt/greencoast/client/
 VOLUME ["/var/lib/greencoast"]
 EXPOSE 8080 8081 8443 9443
 USER nonroot:nonroot
```
README.md (236 changed lines)

@@ -1,24 +1,224 @@

Removed (old README intro; its "Quick Start" commands appear at the end of this file's diff):

```diff
- # GreenCoast — Privacy-First, Shardable Social (Dockerized)
- **Goal:** A BlueSky-like experience with **shards**, **zero-trust**, **no data collection**, **E2EE**, and easy self-hosting — from x86_64 down to **Raspberry Pi Zero**.
- License: **The Unlicense** (public-domain equivalent).
- This repo contains a minimal, working **shard**: an append-only object API with zero-data-collection defaults. It's structured to evolve into full federation, E2EE, and client apps, while keeping Pi Zero as a supported host.
- ## Quick Start (Laptop / Dev)
- **Requirements:** Docker + Compose v2
```

Added (new README):

# GreenCoast

A privacy-first, shardable social backend + minimalist client. **Zero PII**, **zero passwords**, optional **E2EE per post**, and **public-key accounts**. Includes **DPoP-style proof-of-possession**, **Discord SSO with PKCE**, and a tiny static client.

---
## Features

- **Zero-trust by design**: the server stores no emails or passwords.
- **Accounts = public keys** (Ed25519 or P-256). No usernames required.
- **Proof-of-possession (PoP)** on every authenticated API call.
- **Short-lived tokens** (HMAC "gc2") bound to device keys.
- **Shardable storage** (mTLS or signed shard requests).
- **No fingerprinting**: no IP/UA logs; coarse timestamps optional.
- **Static client** with a strong CSP; optional E2EE per post (an illustrative sketch follows this list).
- **Discord SSO (PKCE)** as an *optional* convenience.
- **Filesystem storage** supports both **flat** and **nested** object layouts.
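Private posts are encrypted in the browser before upload (the client's `encryptString` in `client/crypto.js`; its exact wire format is not shown in this compare). Purely to illustrate the passphrase-based approach, here is a minimal Go sketch. The KDF, cipher, and output layout are assumptions for this sketch, not the client's actual scheme.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/scrypt"
)

// encryptPost derives a key from the passphrase with scrypt and seals the post
// with AES-256-GCM. Output layout (salt ‖ nonce ‖ ciphertext) is an assumption.
func encryptPost(passphrase, plaintext []byte) ([]byte, error) {
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		return nil, err
	}
	key, err := scrypt.Key(passphrase, salt, 1<<15, 8, 1, 32)
	if err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(append(salt, nonce...), nonce, plaintext, nil), nil
}

func main() {
	blob, _ := encryptPost([]byte("correct horse"), []byte(`{"title":"hi","body":"secret"}`))
	fmt.Printf("%d bytes of ciphertext, uploadable with X-GC-Private: 1\n", len(blob))
}
```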
---

## Architecture (brief)

- **Shard**: a stateless API + local FS object store + in-memory index (a hypothetical interface sketch follows this list).
- **Client**: static files (HTML/JS/CSS) served by the shard or any static host.
- **Identity**: a device key (P-256/Ed25519) or passkey; the server mints short-lived **gc2** tokens bound to the device key (`cnf` claim).
- **Privacy**: objects can be plaintext (public) or client-encrypted (private).
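The concrete types live in `internal/storage` and `internal/index`; the interfaces below are only a hypothetical sketch of the shape implied by the description above (the `Walk` signature mirrors the one used during reindex in `cmd`).

```go
package arch

import "time"

// ObjectStore sketches the local filesystem object-store role.
type ObjectStore interface {
	Put(hash string, data []byte) error
	Get(hash string) ([]byte, error)
	Delete(hash string) error
	Walk(fn func(hash string, size int64, mod time.Time) error) error
}

// Index sketches the in-memory index role (latest-first listing, fed by SSE events).
type Index interface {
	Put(hash string, bytes int64, storedAt time.Time, private bool) error
	List() []string
	Delete(hash string)
}
```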
---

## Security posture

- **Zero-trust**: no passwords/emails; optional SSO is *linking*, not a source of truth.
- **DPoP-style PoP** on requests (an illustrative token check follows this section):
  - Client sends:
    - `Authorization: Bearer gc2.…`
    - `X-GC-Key: p256:<base64-raw>` (or `ed25519:…`)
    - `X-GC-TS: <unix seconds>`
    - `X-GC-Proof: sig( METHOD "\n" URL "\n" TS "\n" SHA256(body) )`
  - Server verifies the `gc2` signature, key binding (`cnf`), timestamp window, and replay cache.
- **Replay protection**: 10-minute proof cache.
- **No fingerprinting/logging**: no IPs, no UAs.
- **Strict CSP** for the client: blocks XSS/token theft.
- **Limits**: request body limits (default 10 MiB), simple per-account rate limiting.
- **Shard↔shard**: mTLS or per-shard signatures with a timestamp + replay cache.
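The `gc2` token is HMAC-signed and bound to the device key via a `cnf` claim; the exact claim encoding is not spelled out here, so the sketch below assumes base64url JSON claims with an HMAC-SHA256 tag over `gc2.<claims>`. The real check lives in `internal/api`.

```go
package gc2sketch

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"errors"
	"strings"
	"time"
)

// claims is an assumed shape: subject, expiry, and the cnf device-key binding.
type claims struct {
	Sub string `json:"sub"`
	Exp int64  `json:"exp"`
	Cnf string `json:"cnf"` // e.g. "p256:<base64-raw>", must equal X-GC-Key
}

// verifyGC2 checks the HMAC tag, expiry, and that the token is bound to the presented key.
func verifyGC2(token, presentedKey string, secret []byte) (*claims, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 || parts[0] != "gc2" {
		return nil, errors.New("not a gc2 token")
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(parts[0] + "." + parts[1]))
	want, err := base64.RawURLEncoding.DecodeString(parts[2])
	if err != nil || !hmac.Equal(mac.Sum(nil), want) {
		return nil, errors.New("bad signature")
	}
	raw, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, err
	}
	var c claims
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, err
	}
	if time.Now().Unix() > c.Exp {
		return nil, errors.New("expired")
	}
	if c.Cnf != presentedKey {
		return nil, errors.New("token not bound to presented device key")
	}
	return &c, nil
}
```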
---

## Requirements

- Go 1.21+
- Docker (optional)
- A signing key for tokens: `GC_SIGNING_SECRET_HEX` (32+ bytes of hex)
- (Optional) Discord OAuth app (Client ID/Secret + redirect URI)
- (Optional) Cloudflare Tunnel or another TLS reverse proxy
---

## Environment variables

```
GC_HTTP_ADDR=:9080
GC_HTTPS_ADDR=                          # optional
GC_TLS_CERT=                            # optional
GC_TLS_KEY=                             # optional

GC_STATIC_ADDR=:9082
GC_STATIC_DIR=/opt/greencoast/client

GC_DATA_DIR=/var/lib/greencoast
GC_ZERO_TRUST=true
GC_COARSE_TS=false

GC_SIGNING_SECRET_HEX=<64+ hex chars>   # required for gc2 tokens
GC_REQUIRE_POP=true                     # default true; set false for first-run

# Dev convenience (testing only; disable for production)
GC_DEV_ALLOW_UNAUTH=false
GC_DEV_BEARER=

# Discord SSO (optional)
GC_DISCORD_CLIENT_ID=
GC_DISCORD_CLIENT_SECRET=
GC_DISCORD_REDIRECT_URI=https://greencoast.example.com/auth-callback.html
```
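The shard reads these directly from the environment, with defaults, using small helpers such as the `getenvBool` seen later in this compare (in the `cmd` changes). A minimal sketch of that pattern:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// getenv returns the value of key, or def when the variable is unset.
func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

// getenvBool parses a boolean env var, falling back to def when unset or unparsable.
func getenvBool(key string, def bool) bool {
	v := os.Getenv(key)
	if v == "" {
		return def
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return def
	}
	return b
}

func main() {
	fmt.Println(getenv("GC_HTTP_ADDR", ":9080"), getenvBool("GC_REQUIRE_POP", true))
}
```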
---

## Quickstart (Docker)

Minimal compose file for local testing (PoP disabled and dev unauth allowed for the first run):

```yaml
services:
  shard-test:
    build: .
    environment:
      - GC_HTTP_ADDR=:9080
      - GC_STATIC_ADDR=:9082
      - GC_STATIC_DIR=/opt/greencoast/client
      - GC_DATA_DIR=/var/lib/greencoast
      - GC_ZERO_TRUST=true
      - GC_SIGNING_SECRET_HEX=7f6e1a0f2b4d7e3a...   # replace with your secret
      - GC_REQUIRE_POP=false                        # easier first-run
      - GC_DEV_ALLOW_UNAUTH=true
    volumes:
      - ./testdata:/var/lib/greencoast
      - ./client:/opt/greencoast/client:ro
    ports:
      - "9080:9080"
      - "9082:9082"
```

Open `http://localhost:9082` → set the Shard URL (`http://localhost:9080`) → publish a test post.

When you are ready, **turn PoP on** by removing `GC_REQUIRE_POP=false` and disabling `GC_DEV_ALLOW_UNAUTH`.
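With the compose file above running (PoP off, dev unauth on), the API can also be smoke-tested without the browser client. A small sketch; the response fields follow the `{"ok":true,"hash":…}` shape shown in the old quick start, so treat the exact shape as an assumption:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	base := "http://localhost:9080"

	// Liveness check.
	if r, err := http.Get(base + "/healthz"); err != nil || r.StatusCode != 200 {
		log.Fatalf("shard not healthy: %v", err)
	}

	// Upload a small public object (dev mode allows unauthenticated PUT).
	req, _ := http.NewRequest(http.MethodPut, base+"/v1/object",
		bytes.NewBufferString(`{"title":"hello","body":"first post"}`))
	req.Header.Set("Content-Type", "application/octet-stream")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		OK   bool   `json:"ok"`
		Hash string `json:"hash"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("stored:", out.Hash) // fetch it back via GET /v1/object/<hash>
}
```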
---

## Cloudflare Tunnel example

```yaml
ingress:
  - hostname: greencoast.example.com
    service: http://shard-test:9082
  - hostname: api-gc.greencoast.example.com
    service: http://shard-test:9080
  - service: http_status:404
```

Use "Full (strict)" TLS and make sure your certificate covers both hostnames.
---

## Client usage

- **Shard URL**: set it in the top "Connect" section (or use the `?api=` query parameter or `<meta name="gc-api-base">`).
- **Device key sign-in (no OAuth)** (a command-line sketch of this flow follows the list):
  1. The client generates and stores a P-256 device key in the browser.
  2. The client calls `/v1/auth/key/challenge` and then `/v1/auth/key/verify` to obtain a **gc2** token bound to that key.
- **Discord SSO (optional)**:
  - Requires the `GC_DISCORD_CLIENT_*` env vars and a valid `GC_DISCORD_REDIRECT_URI`.
  - Uses PKCE (`S256`) and binds the minted **gc2** token to the device key presented at `/start`.
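The same challenge/verify flow can be driven outside the browser. This sketch mirrors what `client/app.js` does: it signs `"key-verify\n" + nonce` with a P-256 key and sends the raw public key and signature base64url-encoded. The raw `r‖s` signature layout matches WebCrypto's output; whether the server also accepts other encodings is not shown, so treat that detail as an assumption.

```go
package main

import (
	"bytes"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	base := "http://localhost:9080"
	b64 := base64.RawURLEncoding

	// Device key (the browser client persists its key; a real CLI would too).
	priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	pubRaw := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y) // 65B 0x04||X||Y

	// 1) challenge
	var ch struct{ Nonce string }
	resp, err := http.Post(base+"/v1/auth/key/challenge", "application/json", nil)
	if err != nil {
		log.Fatal(err)
	}
	json.NewDecoder(resp.Body).Decode(&ch)
	resp.Body.Close()

	// 2) sign "key-verify\n"+nonce and verify
	digest := sha256.Sum256([]byte("key-verify\n" + ch.Nonce))
	r, s, _ := ecdsa.Sign(rand.Reader, priv, digest[:])
	sig := make([]byte, 64)
	r.FillBytes(sig[:32])
	s.FillBytes(sig[32:])

	body, _ := json.Marshal(map[string]string{
		"nonce": ch.Nonce, "alg": "p256",
		"pub": b64.EncodeToString(pubRaw), "sig": b64.EncodeToString(sig),
	})
	resp, err = http.Post(base+"/v1/auth/key/verify", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var tok struct{ Bearer string }
	json.NewDecoder(resp.Body).Decode(&tok)
	fmt.Println("gc2 token:", tok.Bearer)
}
```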
---

## API (overview)

- `GET /healthz` – liveness
- `PUT /v1/object` – upload a blob (optional headers: `X-GC-Private: 1`, `X-GC-TZ`)
- `GET /v1/object/{hash}` – download a blob
- `DELETE /v1/object/{hash}` – delete a blob
- `GET /v1/index` – list indexed entries (latest first)
- `GET /v1/index/stream` – SSE updates
- `POST /v1/admin/reindex` – rebuild the index from disk
- **Auth**
  - `POST /v1/auth/key/challenge` → `{nonce, exp}`
  - `POST /v1/auth/key/verify` `{nonce, alg, pub, sig}` → `{bearer, sub, exp}`
  - `POST /v1/auth/discord/start` (requires `X-GC-3P-Assent: 1` and `X-GC-Key`)
  - `GET /v1/auth/discord/callback` → redirects with `#bearer=…`
- **GDPR**
  - `GET /v1/gdpr/policy` – current data-handling posture

> When `GC_REQUIRE_POP=true`, all authenticated endpoints require PoP headers.
### PoP header format (pseudocode)

```
Authorization: Bearer gc2.<claims>.<sig>
X-GC-Key: p256:<base64-raw>     # or ed25519:<base64-raw>
X-GC-TS: <unix seconds>
X-GC-Proof: base64(
  Sign_device_key(
    UPPER(METHOD) + "\n" + URL + "\n" + X-GC-TS + "\n" + SHA256(body)
  )
)
```
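For reference, here is how those headers could be assembled for an `ed25519:` device key. The body hash is hex-encoded, as in the browser client, and the signed string is exactly the pseudocode above; that the server accepts this exact byte layout for Ed25519 keys is an assumption.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"strings"
	"time"
)

// popHeaders builds the PoP headers for one request, per the pseudocode above.
func popHeaders(priv ed25519.PrivateKey, pub ed25519.PublicKey, method, path string, body []byte) map[string]string {
	b64 := base64.RawURLEncoding
	ts := fmt.Sprintf("%d", time.Now().Unix())
	sum := sha256.Sum256(body)
	msg := strings.ToUpper(method) + "\n" + path + "\n" + ts + "\n" + hex.EncodeToString(sum[:])
	sig := ed25519.Sign(priv, []byte(msg))
	return map[string]string{
		"X-GC-Key":   "ed25519:" + b64.EncodeToString(pub),
		"X-GC-TS":    ts,
		"X-GC-Proof": b64.EncodeToString(sig),
	}
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	for k, v := range popHeaders(priv, pub, "put", "/v1/object", []byte("hello")) {
		fmt.Println(k+":", v)
	}
}
```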
---

## Storage layout & migration

- **Writes** are flat: `objects/<hash>`
- **Reads** (and reindex) also support:
  - `objects/<hash>/blob|data|content`
  - `objects/<hash>/<single file>`
  - `objects/<prefix>/<hash>` (two-level prefix)
- To **restore** data into a fresh container (a resolver sketch for these layouts follows the list):
  1. Mount your objects at `/var/lib/greencoast/objects`.
  2. Call `POST /v1/admin/reindex` (with auth + PoP, or briefly enable dev unauth).
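The lookup order implied by that list can be captured in one small helper. This is only an illustration, the real resolution lives in `internal/storage`, and it assumes the "two-level prefix" means a directory named after the first two characters of the hash:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolveObject finds the on-disk file for a hash across the supported layouts.
func resolveObject(root, hash string) (string, bool) {
	// 1) flat: objects/<hash>
	if fi, err := os.Stat(filepath.Join(root, hash)); err == nil && !fi.IsDir() {
		return filepath.Join(root, hash), true
	}
	// 2) nested: objects/<hash>/{blob,data,content} or a single file inside
	dir := filepath.Join(root, hash)
	for _, name := range []string{"blob", "data", "content"} {
		if _, err := os.Stat(filepath.Join(dir, name)); err == nil {
			return filepath.Join(dir, name), true
		}
	}
	if entries, err := os.ReadDir(dir); err == nil && len(entries) == 1 && !entries[0].IsDir() {
		return filepath.Join(dir, entries[0].Name()), true
	}
	// 3) two-level prefix: objects/<prefix>/<hash>
	if len(hash) > 2 {
		p := filepath.Join(root, hash[:2], hash)
		if _, err := os.Stat(p); err == nil {
			return p, true
		}
	}
	return "", false
}

func main() {
	if p, ok := resolveObject("/var/lib/greencoast/objects", "deadbeef"); ok {
		fmt.Println("found at", p)
	}
}
```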
---

## Reindex examples

Unauthenticated (dev only):

```bash
curl -X POST https://api-gc.yourdomain/v1/admin/reindex
```

With bearer + PoP (placeholders):

```bash
curl -X POST https://api-gc.yourdomain/v1/admin/reindex ^
  -H "Authorization: Bearer <gc2_token>" ^
  -H "X-GC-Key: p256:<base64raw>" ^
  -H "X-GC-TS: <unix>" ^
  -H "X-GC-Proof: <base64sig>"
```
---

## Hardening checklist (prod)

- Set `GC_REQUIRE_POP=true` and remove the dev bypass.
- Keep the access-token TTL ≤ 8 h; rotate the signing key periodically.
- Serve the static client with a strong CSP (already enabled).
- Run containers as non-root with a read-only FS, `no-new-privileges`, and `cap_drop: ["ALL"]`.
- Put a WAF/rate limits at the edge; the request cap defaults to 10 MiB (tunable; a minimal enforcement sketch follows this list).
- Commit `go.sum`; run `go mod verify` in CI.
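The 10 MiB body cap is the kind of limit that is cheap to enforce at the handler boundary. The shard's own enforcement is in `internal/api` (its diff is suppressed elsewhere in this compare); a generic sketch of the pattern:

```go
package main

import (
	"log"
	"net/http"
)

const maxBody = 10 << 20 // 10 MiB default, tunable

// limitBody rejects request bodies larger than maxBody before handlers read them.
func limitBody(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, maxBody)
		next.ServeHTTP(w, r)
	})
}

func main() {
	log.Fatal(http.ListenAndServe(":9080", limitBody(http.DefaultServeMux)))
}
```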
---

## GDPR

- The server stores **no PII** (no emails, no IP/UA logs).
- Timestamps are UTC (or coarse UTC if enabled).
- `/v1/gdpr/policy` exposes the current posture.
- Roadmap: `/v1/gdpr/export` and `/v1/gdpr/delete` to enumerate/remove blobs signed by a given key.
---

## License

This project is licensed under **The Unlicense**. See `LICENSE` for details.

Removed (the old "Quick Start (Laptop / Dev)" commands, which used the previous port 8080):

```bash
git clone <your repo> greencoast
cd greencoast
cp .env.example .env
docker compose -f docker-compose.dev.yml up --build
# Health:
curl -s http://localhost:8080/healthz
# Put an object (dev mode allows unauthenticated PUT/GET):
curl -s -X PUT --data-binary @README.md http://localhost:8080/v1/object
# -> {"ok":true,"hash":"<sha256>",...}
curl -s http://localhost:8080/v1/object/<sha256> | head
```
client/app.js (309 changed lines)
@@ -1,10 +1,13 @@
|
|||||||
import { encryptString, decryptToString, toBlob } from "./crypto.js";
|
import { encryptString, decryptToString, toBlob } from "./crypto.js";
|
||||||
|
|
||||||
|
// ---------- DOM ----------
|
||||||
const els = {
|
const els = {
|
||||||
shardUrl: document.getElementById("shardUrl"),
|
shardUrl: document.getElementById("shardUrl"),
|
||||||
bearer: document.getElementById("bearer"),
|
bearer: document.getElementById("bearer"),
|
||||||
passphrase: document.getElementById("passphrase"),
|
passphrase: document.getElementById("passphrase"),
|
||||||
saveConn: document.getElementById("saveConn"),
|
saveConn: document.getElementById("saveConn"),
|
||||||
|
keySignIn: document.getElementById("keySignIn"),
|
||||||
|
panicWipe: document.getElementById("panicWipe"),
|
||||||
health: document.getElementById("health"),
|
health: document.getElementById("health"),
|
||||||
visibility: document.getElementById("visibility"),
|
visibility: document.getElementById("visibility"),
|
||||||
title: document.getElementById("title"),
|
title: document.getElementById("title"),
|
||||||
@@ -15,110 +18,256 @@ const els = {
|
|||||||
discordStart: document.getElementById("discordStart"),
|
discordStart: document.getElementById("discordStart"),
|
||||||
};
|
};
|
||||||
|
|
||||||
|
// ---------- Config (no bearer in localStorage) ----------
|
||||||
const LS_KEY = "gc_client_config_v1";
|
const LS_KEY = "gc_client_config_v1";
|
||||||
const POSTS_KEY = "gc_posts_index_v1";
|
const POSTS_KEY = "gc_posts_index_v1";
|
||||||
|
|
||||||
const cfg = loadConfig(); applyConfig(); checkHealth(); syncIndex(); sse();
|
|
||||||
|
|
||||||
els.saveConn.onclick = async () => {
|
|
||||||
const c = { url: norm(els.shardUrl.value), bearer: els.bearer.value.trim(), passphrase: els.passphrase.value };
|
|
||||||
saveConfig(c); await checkHealth(); await syncIndex(); sse(true);
|
|
||||||
};
|
|
||||||
|
|
||||||
els.publish.onclick = publish;
|
|
||||||
els.discordStart.onclick = discordStart;
|
|
||||||
|
|
||||||
function loadConfig(){ try { return JSON.parse(localStorage.getItem(LS_KEY)) ?? {}; } catch { return {}; } }
|
function loadConfig(){ try { return JSON.parse(localStorage.getItem(LS_KEY)) ?? {}; } catch { return {}; } }
|
||||||
function saveConfig(c){ localStorage.setItem(LS_KEY, JSON.stringify(c)); Object.assign(cfg, c); }
|
function saveConfig(c){ localStorage.setItem(LS_KEY, JSON.stringify({ url: c.url, passphrase: c.passphrase })); Object.assign(cfg, c); }
|
||||||
function getPosts(){ try { return JSON.parse(localStorage.getItem(POSTS_KEY)) ?? []; } catch { return []; } }
|
function getPosts(){ try { return JSON.parse(localStorage.getItem(POSTS_KEY)) ?? []; } catch { return []; } }
|
||||||
function setPosts(v){ localStorage.setItem(POSTS_KEY, JSON.stringify(v)); renderPosts(); }
|
function setPosts(v){ localStorage.setItem(POSTS_KEY, JSON.stringify(v)); renderPosts(); }
|
||||||
function norm(u){ return (u||"").replace(/\/+$/,""); }
|
function norm(u){ return (u||"").replace(/\/+$/,""); }
|
||||||
function applyConfig(){ els.shardUrl.value = cfg.url ?? location.origin; els.bearer.value = cfg.bearer ?? ""; els.passphrase.value = cfg.passphrase ?? ""; }
|
function getBearer(){ return sessionStorage.getItem("gc_bearer") || ""; }
|
||||||
|
function setBearer(tok){ if (!tok) sessionStorage.removeItem("gc_bearer"); else sessionStorage.setItem("gc_bearer", tok); els.bearer.value = tok ? "••• (session)" : ""; }
|
||||||
|
const cfg = loadConfig();
|
||||||
|
|
||||||
|
// ---------- Security helpers ----------
|
||||||
|
const enc = new TextEncoder();
|
||||||
|
const dec = new TextDecoder();
|
||||||
|
const b64 = (u) => { let s=""; u=new Uint8Array(u); for (let i=0;i<u.length;i++) s+=String.fromCharCode(u[i]); return btoa(s).replace(/\+/g,"-").replace(/\//g,"_").replace(/=+$/,""); };
|
||||||
|
const ub64 = (s) => { s=s.replace(/-/g,"+").replace(/_/g,"/"); while(s.length%4) s+="="; const bin=atob(s); const b=new Uint8Array(bin.length); for(let i=0;i<bin.length;i++) b[i]=bin.charCodeAt(i); return b.buffer; };
|
||||||
|
async function sha256Hex(buf){ const h = await crypto.subtle.digest("SHA-256", buf); return [...new Uint8Array(h)].map(x=>x.toString(16).padStart(2,"0")).join(""); }
|
||||||
|
|
||||||
|
// Device key (P-256), stored locally (not a bearer)
|
||||||
|
async function getDevice() {
|
||||||
|
let dev = JSON.parse(localStorage.getItem('gc_device_key_v1')||'null');
|
||||||
|
if (!dev) {
|
||||||
|
const kp = await crypto.subtle.generateKey({name:"ECDSA", namedCurve:"P-256"}, true, ["sign","verify"]);
|
||||||
|
const pkcs8 = await crypto.subtle.exportKey("pkcs8", kp.privateKey);
|
||||||
|
const rawPub = await crypto.subtle.exportKey("raw", kp.publicKey); // 65B 0x04||X||Y
|
||||||
|
dev = { alg:"p256", priv: b64(pkcs8), pub: b64(rawPub) };
|
||||||
|
localStorage.setItem('gc_device_key_v1', JSON.stringify(dev));
|
||||||
|
}
|
||||||
|
return dev;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Proof-of-Possession headers for this request
|
||||||
|
async function popHeaders(method, pathOnly, bodyBuf){
|
||||||
|
const dev = await getDevice();
|
||||||
|
const ts = Math.floor(Date.now()/1000).toString();
|
||||||
|
const hashHex = await sha256Hex(bodyBuf || new Uint8Array());
|
||||||
|
const msg = enc.encode(method.toUpperCase()+"\n"+pathOnly+"\n"+ts+"\n"+hashHex);
|
||||||
|
const priv = await crypto.subtle.importKey("pkcs8", ub64(dev.priv), { name:"ECDSA", namedCurve:"P-256" }, false, ["sign"]);
|
||||||
|
const sig = await crypto.subtle.sign({ name:"ECDSA", hash:"SHA-256" }, priv, msg);
|
||||||
|
return {
|
||||||
|
"X-GC-Key": "p256:"+dev.pub,
|
||||||
|
"X-GC-TS": ts,
|
||||||
|
"X-GC-Proof": b64(sig),
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
// Idle timeout → clear bearer
|
||||||
|
(function idleGuard(){
|
||||||
|
let idle;
|
||||||
|
const bump=()=>{ clearTimeout(idle); idle=setTimeout(()=>setBearer(""), 30*60*1000); }; // 30 min
|
||||||
|
["click","keydown","mousemove","touchstart","focus","visibilitychange"].forEach(ev=>addEventListener(ev,bump,{passive:true}));
|
||||||
|
bump();
|
||||||
|
})();
|
||||||
|
|
||||||
|
// ---------- API base detection ----------
|
||||||
|
function defaultApiBase() {
|
||||||
|
try {
|
||||||
|
const qs = new URLSearchParams(window.location.search);
|
||||||
|
const qApi = qs.get("api"); if (qApi) return qApi.replace(/\/+$/, "");
|
||||||
|
} catch {}
|
||||||
|
const m = document.querySelector('meta[name="gc-api-base"]');
|
||||||
|
if (m && m.content) return m.content.replace(/\/+$/, "");
|
||||||
|
try {
|
||||||
|
const u = new URL(window.location.href);
|
||||||
|
const proto = u.protocol, host = u.hostname, portStr = u.port;
|
||||||
|
const bracketHost = host.includes(":") ? `[${host}]` : host;
|
||||||
|
const port = portStr ? parseInt(portStr, 10) : null;
|
||||||
|
let apiPort = port;
|
||||||
|
if (port === 8082) apiPort = 8080;
|
||||||
|
else if (port === 9082) apiPort = 9080;
|
||||||
|
else if (port) apiPort = Math.max(1, port - 2);
|
||||||
|
return apiPort ? `${proto}//${bracketHost}:${apiPort}` : `${proto}//${bracketHost}`;
|
||||||
|
} catch {
|
||||||
|
return window.location.origin.replace(/\/+$/, "");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------- App init ----------
|
||||||
|
function applyConfig(){
|
||||||
|
els.shardUrl.value = cfg.url ?? defaultApiBase();
|
||||||
|
els.passphrase.value = cfg.passphrase ?? "";
|
||||||
|
els.bearer.value = getBearer() ? "••• (session)" : "";
|
||||||
|
}
|
||||||
|
applyConfig(); checkHealth(); syncIndex(); sse();
|
||||||
|
|
||||||
|
// ---------- UI wiring ----------
|
||||||
|
els.saveConn.onclick = async () => {
|
||||||
|
const c = { url: norm(els.shardUrl.value), passphrase: els.passphrase.value };
|
||||||
|
saveConfig(c); await checkHealth(); await syncIndex(); sse(true);
|
||||||
|
};
|
||||||
|
els.publish.onclick = publish;
|
||||||
|
els.discordStart.onclick = discordStart;
|
||||||
|
els.keySignIn.onclick = keySignIn;
|
||||||
|
els.panicWipe.onclick = panicWipe;
|
||||||
|
|
||||||
|
// Panic wipe hotkey (double-tap ESC)
|
||||||
|
let escT=0;
|
||||||
|
addEventListener("keydown", (e) => {
|
||||||
|
if (e.key === "Escape") {
|
||||||
|
const now = Date.now();
|
||||||
|
if (now - escT < 600) panicWipe();
|
||||||
|
escT = now;
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
// ---------- Health / Index / SSE ----------
|
||||||
async function checkHealth() {
|
async function checkHealth() {
|
||||||
if (!cfg.url) return; els.health.textContent = "Checking…";
|
if (!cfg.url) return; els.health.textContent = "Checking…";
|
||||||
try { const r = await fetch(cfg.url + "/healthz"); els.health.textContent = r.ok ? "Connected ✔" : `Error: ${r.status}`; }
|
try { const r = await fetch(cfg.url + "/healthz"); els.health.textContent = r.ok ? "Connected ✔" : `Error: ${r.status}`; }
|
||||||
catch { els.health.textContent = "Not reachable"; }
|
catch { els.health.textContent = "Not reachable"; }
|
||||||
}
|
}
|
||||||
|
|
||||||
async function publish() {
|
|
||||||
if (!cfg.url) return msg("Set shard URL first.", true);
|
|
||||||
const title = els.title.value.trim(); const body = els.body.value; const vis = els.visibility.value;
|
|
||||||
try {
|
|
||||||
let blob, enc=false;
|
|
||||||
if (vis === "private") {
|
|
||||||
if (!cfg.passphrase) return msg("Set a passphrase for private posts.", true);
|
|
||||||
const payload = await encryptString(JSON.stringify({ title, body }), cfg.passphrase);
|
|
||||||
blob = toBlob(payload); enc=true;
|
|
||||||
} else { blob = toBlob(JSON.stringify({ title, body })); }
|
|
||||||
const headers = { "Content-Type":"application/octet-stream" };
|
|
||||||
if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
|
|
||||||
if (enc) headers["X-GC-Private"] = "1";
|
|
||||||
const r = await fetch(cfg.url + "/v1/object", { method:"PUT", headers, body: blob });
|
|
||||||
if (!r.ok) throw new Error(await r.text());
|
|
||||||
const j = await r.json();
|
|
||||||
const posts = getPosts();
|
|
||||||
posts.unshift({ hash:j.hash, title: title || "(untitled)", bytes:j.bytes, ts:j.stored_at, enc });
|
|
||||||
setPosts(posts);
|
|
||||||
els.body.value = ""; msg(`Published ${enc?"private":"public"} post. Hash: ${j.hash}`);
|
|
||||||
} catch(e){ msg("Publish failed: " + (e?.message||e), true); }
|
|
||||||
}
|
|
||||||
|
|
||||||
function msg(t, err=false){ els.publishStatus.textContent=t; els.publishStatus.style.color = err ? "#ff6b6b" : "#8b949e"; }
|
|
||||||
|
|
||||||
async function syncIndex() {
|
async function syncIndex() {
|
||||||
if (!cfg.url) return;
|
if (!cfg.url) return;
|
||||||
try {
|
try {
|
||||||
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
|
const hdrs = {};
|
||||||
const r = await fetch(cfg.url + "/v1/index", { headers });
|
const b = getBearer();
|
||||||
|
if (b) Object.assign(hdrs, await popHeaders("GET", "/v1/index", new Uint8Array()));
|
||||||
|
const r = await fetch(cfg.url + "/v1/index", { headers: Object.assign(hdrs, b?{Authorization:"Bearer "+b}:{}) });
|
||||||
if (!r.ok) throw new Error("index fetch failed");
|
if (!r.ok) throw new Error("index fetch failed");
|
||||||
const entries = await r.json();
|
const entries = await r.json();
|
||||||
setPosts(entries.map(e => ({ hash:e.hash, title:"(title unknown — fetch)", bytes:e.bytes, ts:e.stored_at, enc:e.private })));
|
setPosts(entries.map(e => ({ hash:e.hash, title:"(title unknown — fetch)", bytes:e.bytes, ts:e.stored_at, enc:e.private, tz:e.creator_tz||"" })));
|
||||||
} catch(e){ console.warn("index sync failed", e); }
|
} catch(e){ console.warn("index sync failed", e); }
|
||||||
}
|
}
|
||||||
|
|
||||||
let sseCtrl;
|
let sseCtrl;
|
||||||
function sse(){
|
function sse(reset){
|
||||||
if (!cfg.url) return;
|
if (!cfg.url) return;
|
||||||
if (sseCtrl) { sseCtrl.abort(); sseCtrl = undefined; }
|
if (sseCtrl) { sseCtrl.abort(); sseCtrl = undefined; }
|
||||||
sseCtrl = new AbortController();
|
sseCtrl = new AbortController();
|
||||||
const url = cfg.url + "/v1/index/stream";
|
const url = cfg.url + "/v1/index/stream";
|
||||||
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
|
const b = getBearer();
|
||||||
fetch(url, { headers, signal: sseCtrl.signal }).then(async resp => {
|
const start = async () => {
|
||||||
if (!resp.ok) return;
|
const hdrs = {};
|
||||||
const reader = resp.body.getReader(); const decoder = new TextDecoder();
|
if (b) Object.assign(hdrs, await popHeaders("GET", "/v1/index/stream", new Uint8Array()), { Authorization: "Bearer "+b });
|
||||||
let buf = "";
|
fetch(url, { headers: hdrs, signal: sseCtrl.signal }).then(async resp => {
|
||||||
while (true) {
|
if (!resp.ok) return;
|
||||||
const { value, done } = await reader.read(); if (done) break;
|
const reader = resp.body.getReader(); const decoder = new TextDecoder();
|
||||||
buf += decoder.decode(value, { stream:true });
|
let buf = "";
|
||||||
let idx;
|
while (true) {
|
||||||
while ((idx = buf.indexOf("\n\n")) >= 0) {
|
const { value, done } = await reader.read(); if (done) break;
|
||||||
const chunk = buf.slice(0, idx); buf = buf.slice(idx+2);
|
buf += decoder.decode(value, { stream:true });
|
||||||
if (chunk.startsWith("data: ")) {
|
let idx; while ((idx = buf.indexOf("\n\n")) >= 0) {
|
||||||
try {
|
const chunk = buf.slice(0, idx); buf = buf.slice(idx+2);
|
||||||
const ev = JSON.parse(chunk.slice(6));
|
if (chunk.startsWith("data: ")) {
|
||||||
if (ev.event === "put") {
|
try {
|
||||||
const e = ev.data;
|
const ev = JSON.parse(chunk.slice(6));
|
||||||
const posts = getPosts();
|
if (ev.event === "put") {
|
||||||
if (!posts.find(p => p.hash === e.hash)) {
|
const e = ev.data;
|
||||||
posts.unshift({ hash:e.hash, title:"(title unknown — fetch)", bytes:e.bytes, ts:e.stored_at, enc:e.private });
|
const posts = getPosts();
|
||||||
setPosts(posts);
|
if (!posts.find(p => p.hash === e.hash)) {
|
||||||
|
posts.unshift({ hash:e.hash, title:"(title unknown — fetch)", bytes:e.bytes, ts:e.stored_at, enc:e.private, tz:e.creator_tz||"" });
|
||||||
|
setPosts(posts);
|
||||||
|
}
|
||||||
|
} else if (ev.event === "delete") {
|
||||||
|
const h = ev.data.hash; setPosts(getPosts().filter(p => p.hash !== h));
|
||||||
}
|
}
|
||||||
} else if (ev.event === "delete") {
|
} catch {}
|
||||||
const h = ev.data.hash; setPosts(getPosts().filter(p => p.hash !== h));
|
}
|
||||||
}
|
|
||||||
} catch {}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
}).catch(()=>{});
|
||||||
|
};
|
||||||
|
start();
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------- Auth ----------
|
||||||
|
async function keySignIn(){
|
||||||
|
try {
|
||||||
|
if (!cfg.url) { alert("Set shard URL first."); return; }
|
||||||
|
// 1) challenge
|
||||||
|
const cResp = await fetch(cfg.url + "/v1/auth/key/challenge", { method:"POST" });
|
||||||
|
const cTxt = await cResp.text();
|
||||||
|
if (!cResp.ok) { alert("Challenge failed: " + cTxt); return; }
|
||||||
|
const c = JSON.parse(cTxt);
|
||||||
|
// 2) sign and verify
|
||||||
|
const dev = await getDevice();
|
||||||
|
const priv = await crypto.subtle.importKey("pkcs8", ub64(dev.priv), { name:"ECDSA", namedCurve:"P-256" }, false, ["sign"]);
|
||||||
|
const msg = enc.encode("key-verify\n" + c.nonce);
|
||||||
|
const sig = await crypto.subtle.sign({ name:"ECDSA", hash:"SHA-256" }, priv, msg);
|
||||||
|
const vResp = await fetch(cfg.url + "/v1/auth/key/verify", {
|
||||||
|
method:"POST",
|
||||||
|
headers: { "Content-Type":"application/json" },
|
||||||
|
body: JSON.stringify({ nonce:c.nonce, alg:"p256", pub: dev.pub, sig: b64(sig) })
|
||||||
|
});
|
||||||
|
const vTxt = await vResp.text();
|
||||||
|
if (!vResp.ok) { alert("Verify failed: " + vTxt); return; }
|
||||||
|
const j = JSON.parse(vTxt);
|
||||||
|
setBearer(j.bearer);
|
||||||
|
alert("Signed in ✔ (session)");
|
||||||
|
await syncIndex();
|
||||||
|
} catch (e) {
|
||||||
|
alert("Key sign-in exception: " + (e?.message || e));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function panicWipe(){
|
||||||
|
try {
|
||||||
|
if (cfg.url) await fetch(cfg.url + "/v1/session/clear", { method:"POST" });
|
||||||
|
} catch {}
|
||||||
|
sessionStorage.clear();
|
||||||
|
localStorage.clear();
|
||||||
|
caches && caches.keys().then(keys => keys.forEach(k => caches.delete(k)));
|
||||||
|
location.replace("about:blank");
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------- Publishing / Viewing ----------
|
||||||
|
function msg(t, err=false){ els.publishStatus.textContent=t; els.publishStatus.style.color = err ? "#ff6b6b" : "inherit"; }
|
||||||
|
|
||||||
|
async function publish() {
|
||||||
|
if (!cfg.url) return msg("Set shard URL first.", true);
|
||||||
|
const b = getBearer(); if (!b) return msg("Sign in first (device key).", true);
|
||||||
|
|
||||||
|
const title = els.title.value.trim();
|
||||||
|
const body = els.body.value;
|
||||||
|
const vis = els.visibility.value;
|
||||||
|
try {
|
||||||
|
let blob, encp=false;
|
||||||
|
if (vis === "private") {
|
||||||
|
if (!cfg.passphrase) return msg("Set a passphrase for private posts.", true);
|
||||||
|
const payload = await encryptString(JSON.stringify({ title, body }), cfg.passphrase);
|
||||||
|
blob = toBlob(payload); encp=true;
|
||||||
|
} else {
|
||||||
|
blob = toBlob(JSON.stringify({ title, body }));
|
||||||
}
|
}
|
||||||
}).catch(()=>{});
|
const buf = new Uint8Array(await blob.arrayBuffer());
|
||||||
|
const path = "/v1/object";
|
||||||
|
const headers = { "Content-Type":"application/octet-stream", Authorization: "Bearer "+b };
|
||||||
|
if (encp) headers["X-GC-Private"] = "1";
|
||||||
|
const pop = await popHeaders("PUT", path, buf);
|
||||||
|
Object.assign(headers, pop);
|
||||||
|
const r = await fetch(cfg.url + path, { method:"PUT", headers, body: buf });
|
||||||
|
if (!r.ok) throw new Error(await r.text());
|
||||||
|
const j = await r.json();
|
||||||
|
const posts = getPosts();
|
||||||
|
posts.unshift({ hash:j.hash, title: title || "(untitled)", bytes:j.bytes, ts:j.stored_at, enc:j.private, tz:j.creator_tz||"" });
|
||||||
|
setPosts(posts);
|
||||||
|
els.body.value = ""; msg(`Published ${encp?"private":"public"} post. Hash: ${j.hash}`);
|
||||||
|
} catch(e){ msg("Publish failed: " + (e?.message||e), true); }
|
||||||
}
|
}
|
||||||
|
|
||||||
async function viewPost(p, pre) {
|
async function viewPost(p, pre) {
|
||||||
pre.textContent = "Loading…";
|
pre.textContent = "Loading…";
|
||||||
try {
|
try {
|
||||||
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
|
const path = "/v1/object/" + p.hash;
|
||||||
const r = await fetch(cfg.url + "/v1/object/" + p.hash, { headers });
|
const headers = {};
|
||||||
|
const b = getBearer();
|
||||||
|
if (b) Object.assign(headers, await popHeaders("GET", path, new Uint8Array()), { Authorization: "Bearer "+b });
|
||||||
|
const r = await fetch(cfg.url + path, { headers });
|
||||||
if (!r.ok) throw new Error("fetch failed " + r.status);
|
if (!r.ok) throw new Error("fetch failed " + r.status);
|
||||||
const buf = new Uint8Array(await r.arrayBuffer());
|
const buf = new Uint8Array(await r.arrayBuffer());
|
||||||
let text;
|
let text;
|
||||||
@@ -134,22 +283,29 @@ async function viewPost(p, pre) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
async function saveBlob(p) {
|
async function saveBlob(p) {
|
||||||
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
|
const path = "/v1/object/" + p.hash;
|
||||||
const r = await fetch(cfg.url + "/v1/object/" + p.hash, { headers });
|
const headers = {};
|
||||||
|
const b = getBearer();
|
||||||
|
if (b) Object.assign(headers, await popHeaders("GET", path, new Uint8Array()), { Authorization: "Bearer "+b });
|
||||||
|
const r = await fetch(cfg.url + path, { headers });
|
||||||
if (!r.ok) return alert("download failed " + r.status);
|
if (!r.ok) return alert("download failed " + r.status);
|
||||||
const b = await r.blob();
|
const bl = await r.blob();
|
||||||
const a = document.createElement("a"); a.href = URL.createObjectURL(b);
|
const a = document.createElement("a"); a.href = URL.createObjectURL(bl);
|
||||||
a.download = p.hash + (p.enc ? ".gcenc" : ".json"); a.click(); URL.revokeObjectURL(a.href);
|
a.download = p.hash + (p.enc ? ".gcenc" : ".json"); a.click(); URL.revokeObjectURL(a.href);
|
||||||
}
|
}
|
||||||
|
|
||||||
async function delServer(p) {
|
async function delServer(p) {
|
||||||
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
|
const path = "/v1/object/" + p.hash;
|
||||||
|
const b = getBearer(); if (!b) return alert("Sign in first.");
|
||||||
|
const headers = { Authorization: "Bearer "+b };
|
||||||
|
Object.assign(headers, await popHeaders("DELETE", path, new Uint8Array()));
|
||||||
if (!confirm("Delete blob from server by hash?")) return;
|
if (!confirm("Delete blob from server by hash?")) return;
|
||||||
const r = await fetch(cfg.url + "/v1/object/" + p.hash, { method:"DELETE", headers });
|
const r = await fetch(cfg.url + path, { method:"DELETE", headers });
|
||||||
if (!r.ok) return alert("delete failed " + r.status);
|
if (!r.ok) return alert("delete failed " + r.status);
|
||||||
setPosts(getPosts().filter(x=>x.hash!==p.hash));
|
setPosts(getPosts().filter(x=>x.hash!==p.hash));
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// ---------- Discord SSO ----------
|
||||||
async function discordStart() {
|
async function discordStart() {
|
||||||
if (!cfg.url) { alert("Set shard URL first."); return; }
|
if (!cfg.url) { alert("Set shard URL first."); return; }
|
||||||
const r = await fetch(cfg.url + "/v1/auth/discord/start", { headers: { "X-GC-3P-Assent":"1" }});
|
const r = await fetch(cfg.url + "/v1/auth/discord/start", { headers: { "X-GC-3P-Assent":"1" }});
|
||||||
@@ -158,6 +314,7 @@ async function discordStart() {
|
|||||||
location.href = j.url;
|
location.href = j.url;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// ---------- Render ----------
|
||||||
function renderPosts() {
|
function renderPosts() {
|
||||||
const posts = getPosts(); els.posts.innerHTML = "";
|
const posts = getPosts(); els.posts.innerHTML = "";
|
||||||
for (const p of posts) {
|
for (const p of posts) {
|
||||||
@@ -171,7 +328,7 @@ function renderPosts() {
|
|||||||
<button data-act="delete">Delete (server)</button>
|
<button data-act="delete">Delete (server)</button>
|
||||||
<button data-act="remove">Remove (local)</button>
|
<button data-act="remove">Remove (local)</button>
|
||||||
</div>
|
</div>
|
||||||
<pre class="content" style="white-space:pre-wrap;margin-top:.5rem;"></pre>`;
|
<pre class="content"></pre>`;
|
||||||
const pre = div.querySelector(".content");
|
const pre = div.querySelector(".content");
|
||||||
div.querySelector('[data-act="view"]').onclick = () => viewPost(p, pre);
|
div.querySelector('[data-act="view"]').onclick = () => viewPost(p, pre);
|
||||||
div.querySelector('[data-act="save"]').onclick = () => saveBlob(p);
|
div.querySelector('[data-act="save"]').onclick = () => saveBlob(p);
|
||||||
|
auth-callback.html (rewritten)

@@ -1,43 +1,20 @@

New version (reads the bearer from the URL fragment and stores it in sessionStorage):

```html
<!doctype html>
<meta charset="utf-8">
<title>Signing you in…</title>
<script>
(function(){
  const hash = new URLSearchParams(location.hash.slice(1));
  const bearer = hash.get("bearer");
  const next = hash.get("next") || "/";
  try {
    // Prefer sessionStorage; keep localStorage for backward compatibility
    if (bearer) sessionStorage.setItem("gc_bearer", bearer);
    const k = "gc_client_config_v1";
    const cfg = JSON.parse(localStorage.getItem(k) || "{}");
    if (bearer) cfg.bearer = bearer;
    localStorage.setItem(k, JSON.stringify(cfg));
  } catch {}
  history.replaceState(null, "", next);
  location.href = next;
})();
</script>
```

Old version (removed; exchanged the OAuth `code` itself and kept the bearer in localStorage):

```html
<!doctype html>
<html>
<head>
<meta charset="utf-8"/>
<title>GreenCoast — Auth Callback</title>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<style>
  body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Arial; background:#0b1117; color:#e6edf3; display:flex; align-items:center; justify-content:center; height:100vh; }
  .card { background:#0f1621; padding:1rem 1.2rem; border-radius:14px; max-width:560px; }
  .muted{ color:#8b949e; }
</style>
</head>
<body>
<div class="card">
  <h3>Signing you in…</h3>
  <div id="msg" class="muted">Please wait.</div>
</div>
<script type="module">
  const params = new URLSearchParams(location.search);
  const code = params.get("code");
  const origin = location.origin; // shard and client served together
  const msg = (t)=>document.getElementById("msg").textContent = t;

  async function run() {
    if (!code) { msg("Missing 'code' parameter."); return; }
    try {
      const r = await fetch(origin + "/v1/auth/discord/callback?assent=1&code=" + encodeURIComponent(code));
      if (!r.ok) { msg("Exchange failed: " + r.status); return; }
      const j = await r.json();
      const key = "gc_client_config_v1";
      const cfg = JSON.parse(localStorage.getItem(key) || "{}");
      cfg.bearer = j.token;
      localStorage.setItem(key, JSON.stringify(cfg));
      msg("Success. Redirecting…");
      setTimeout(()=>location.href="/", 800);
    } catch(e) {
      msg("Error: " + (e?.message || e));
    }
  }
  run();
</script>
</body>
</html>
```
|
@@ -5,6 +5,8 @@
|
|||||||
<title>GreenCoast — Client</title>
|
<title>GreenCoast — Client</title>
|
||||||
<meta name="viewport" content="width=device-width,initial-scale=1"/>
|
<meta name="viewport" content="width=device-width,initial-scale=1"/>
|
||||||
<link rel="stylesheet" href="./styles.css"/>
|
<link rel="stylesheet" href="./styles.css"/>
|
||||||
|
<!-- Optional: explicit API base -->
|
||||||
|
<meta name="gc-api-base" content="https://api-gc.fullmooncyberworks.com">
|
||||||
</head>
|
</head>
|
||||||
<body>
|
<body>
|
||||||
<div class="container">
|
<div class="container">
|
||||||
@@ -14,11 +16,11 @@
|
|||||||
<h2>Connect</h2>
|
<h2>Connect</h2>
|
||||||
<div class="row">
|
<div class="row">
|
||||||
<label>Shard URL</label>
|
<label>Shard URL</label>
|
||||||
<input id="shardUrl" placeholder="http://localhost:8080" />
|
<input id="shardUrl" placeholder="https://api-gc.fullmooncyberworks.com" />
|
||||||
</div>
|
</div>
|
||||||
<div class="row">
|
<div class="row">
|
||||||
<label>Bearer (optional)</label>
|
<label>Bearer (session)</label>
|
||||||
<input id="bearer" placeholder="dev-local-token" />
|
<input id="bearer" placeholder="(auto after sign-in)" disabled />
|
||||||
</div>
|
</div>
|
||||||
<div class="row">
|
<div class="row">
|
||||||
<label>Passphrase (private posts)</label>
|
<label>Passphrase (private posts)</label>
|
||||||
@@ -28,12 +30,16 @@
|
|||||||
<label>3rd-party SSO</label>
|
<label>3rd-party SSO</label>
|
||||||
<div>
|
<div>
|
||||||
<button id="discordStart">Sign in with Discord</button>
|
<button id="discordStart">Sign in with Discord</button>
|
||||||
<div class="muted" style="margin-top:.4rem;">
|
<div class="muted" id="ssoNote">
|
||||||
We use external providers only if you choose to. We cannot vouch for their security.
|
We use external providers only if you choose to. We cannot vouch for their security.
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<button id="saveConn">Save</button>
|
<div class="actions">
|
||||||
|
<button id="saveConn">Save</button>
|
||||||
|
<button id="keySignIn">Sign in (device key)</button>
|
||||||
|
<button id="panicWipe" class="danger">Panic wipe</button>
|
||||||
|
</div>
|
||||||
<div id="health" class="muted"></div>
|
<div id="health" class="muted"></div>
|
||||||
</section>
|
</section>
|
||||||
|
|
||||||
@@ -42,8 +48,8 @@
|
|||||||
<div class="row">
|
<div class="row">
|
||||||
<label>Visibility</label>
|
<label>Visibility</label>
|
||||||
<select id="visibility">
|
<select id="visibility">
|
||||||
<option value="public">Public (plaintext)</option>
|
|
||||||
<option value="private">Private (E2EE via passphrase)</option>
|
<option value="private">Private (E2EE via passphrase)</option>
|
||||||
|
<option value="public">Public (plaintext)</option>
|
||||||
</select>
|
</select>
|
||||||
</div>
|
</div>
|
||||||
<div class="row">
|
<div class="row">
|
||||||
@@ -54,7 +60,9 @@
|
|||||||
<label>Body</label>
|
<label>Body</label>
|
||||||
<textarea id="body" rows="6" placeholder="Write your post..."></textarea>
|
<textarea id="body" rows="6" placeholder="Write your post..."></textarea>
|
||||||
</div>
|
</div>
|
||||||
<button id="publish">Publish</button>
|
<div class="actions">
|
||||||
|
<button id="publish">Publish</button>
|
||||||
|
</div>
|
||||||
<div id="publishStatus" class="muted"></div>
|
<div id="publishStatus" class="muted"></div>
|
||||||
</section>
|
</section>
|
||||||
|
|
||||||
|
client/styles.css (restyled)

```diff
@@ -1,18 +1,15 @@
-:root { --bg:#0b1117; --card:#0f1621; --fg:#e6edf3; --muted:#8b949e; --accent:#2ea043; }
-* { box-sizing: border-box; }
-body { margin:0; font-family: ui-sans-serif, system-ui, -apple-system, Segoe UI, Roboto, Arial; background:var(--bg); color:var(--fg); }
-.container { max-width: 900px; margin: 2rem auto; padding: 0 1rem; }
-h1 { font-size: 1.5rem; margin-bottom: 1rem; }
-.card { background: var(--card); border-radius: 14px; padding: 1rem; margin-bottom: 1rem; box-shadow: 0 8px 24px rgba(0,0,0,.3); }
-h2 { margin-top: 0; font-size: 1.1rem; }
-.row { display: grid; grid-template-columns: 160px 1fr; gap: .75rem; align-items: center; margin: .5rem 0; }
-label { color: var(--muted); }
-input, select, textarea { width: 100%; padding: .6rem .7rem; border-radius: 10px; border: 1px solid #233; background: #0b1520; color: var(--fg); }
-button { background: var(--accent); color: #08130b; border: none; padding: .6rem .9rem; border-radius: 10px; cursor: pointer; font-weight: 700; }
-button:hover { filter: brightness(1.05); }
-.muted { color: var(--muted); margin-top: .5rem; font-size: .9rem; }
-.post { border: 1px solid #1d2734; border-radius: 12px; padding: .75rem; margin: .5rem 0; background: #0c1824; }
-.post .meta { font-size: .85rem; color: var(--muted); margin-bottom: .4rem; }
-.post .actions { margin-top: .5rem; display:flex; gap:.5rem; }
-code { background:#0a1320; padding:.15rem .35rem; border-radius:6px; }
-.badge { font-size:.75rem; padding:.1rem .4rem; border-radius: 999px; background:#132235; color:#9fb7d0; margin-left:.5rem; }
+:root { color-scheme: light dark; }
+body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Ubuntu, Cantarell, "Noto Sans", sans-serif; margin: 0; padding: 2rem; }
+.container { max-width: 860px; margin: 0 auto; }
+h1 { margin: 0 0 1rem 0; }
+.card { border: 1px solid #30363d; border-radius: 16px; padding: 1rem; margin: 1rem 0; box-shadow: 0 2px 6px rgba(0,0,0,.1); }
+.row { display: grid; grid-template-columns: 160px 1fr; gap: .8rem; align-items: center; margin: .6rem 0; }
+label { opacity: .8; }
+input, textarea, select, button { font: inherit; padding: .6rem .7rem; border-radius: 10px; border: 1px solid #30363d; background: transparent; color: inherit; }
+button { cursor: pointer; }
+button.danger { border-color: #a4002a; color: #a4002a; }
+.actions { display: flex; gap: .6rem; flex-wrap: wrap; margin-top: .4rem; }
+.muted { opacity: .7; font-size: .9rem; }
+.badge { display: inline-block; padding: .1rem .4rem; border-radius: 8px; border: 1px solid #30363d; font-size: .75rem; margin-left: .4rem; }
+.post { border-top: 1px dashed #30363d; padding: .6rem 0; }
+pre.content { white-space: pre-wrap; margin-top: .5rem; }
```
|
@@ -1,89 +1,163 @@
|
|||||||
package main
|
package main
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"flag"
|
|
||||||
"log"
|
"log"
|
||||||
"path/filepath"
|
"net/http"
|
||||||
|
"os"
|
||||||
|
"strconv"
|
||||||
|
"time"
|
||||||
|
|
||||||
"greencoast/internal/api"
|
"greencoast/internal/api"
|
||||||
"greencoast/internal/config"
|
|
||||||
"greencoast/internal/federation"
|
|
||||||
"greencoast/internal/index"
|
"greencoast/internal/index"
|
||||||
"greencoast/internal/storage"
|
"greencoast/internal/storage"
|
||||||
)
|
)
|
||||||
|
|
||||||
func main() {
|
func getenvBool(key string, def bool) bool {
|
||||||
cfgPath := flag.String("config", "shard.yaml", "path to config")
|
v := os.Getenv(key)
|
||||||
flag.Parse()
|
if v == "" {
|
||||||
|
return def
|
||||||
cfg, err := config.Load(*cfgPath)
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("config error: %v", err)
|
|
||||||
}
|
}
|
||||||
|
b, err := strconv.ParseBool(v)
|
||||||
store, err := storage.NewFSStore(cfg.Storage.Path, cfg.Storage.MaxObjectKB)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatalf("storage error: %v", err)
|
return def
|
||||||
}
|
}
|
||||||
|
return b
|
||||||
|
}
|
||||||
|
|
||||||
dataRoot := filepath.Dir(cfg.Storage.Path)
|
func staticHeaders(next http.Handler) http.Handler {
|
||||||
idx := index.New(dataRoot)
|
onion := os.Getenv("GC_ONION_LOCATION") // optional: e.g., http://xxxxxxxx.onion/
|
||||||
|
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||||
srv := api.New(
|
// Security headers + strict CSP (no inline) + COEP
|
||||||
store, idx,
|
w.Header().Set("Referrer-Policy", "no-referrer")
|
||||||
cfg.Privacy.RetainTimestamps == "coarse",
|
w.Header().Set("Cross-Origin-Opener-Policy", "same-origin")
|
||||||
cfg.Security.ZeroTrust,
|
w.Header().Set("Cross-Origin-Resource-Policy", "same-site")
|
||||||
api.AuthProviders{
|
w.Header().Set("Permissions-Policy", "camera=(), microphone=(), geolocation=(), interest-cohort=(), browsing-topics=()")
|
||||||
SigningSecretHex: cfg.Auth.SigningSecret,
|
w.Header().Set("X-Frame-Options", "DENY")
|
||||||
Discord: api.DiscordProvider{
|
w.Header().Set("X-Content-Type-Options", "nosniff")
|
||||||
Enabled: cfg.Auth.SSO.Discord.Enabled,
|
w.Header().Set("Strict-Transport-Security", "max-age=15552000; includeSubDomains; preload")
|
||||||
ClientID: cfg.Auth.SSO.Discord.ClientID,
|
w.Header().Set("Cross-Origin-Embedder-Policy", "require-corp")
|
||||||
ClientSecret: cfg.Auth.SSO.Discord.ClientSecret,
|
// Allow only self + HTTPS for fetch/SSE; no inline styles/scripts
|
||||||
RedirectURI: cfg.Auth.SSO.Discord.RedirectURI,
|
w.Header().Set("Content-Security-Policy",
|
||||||
},
|
"default-src 'self'; "+
|
||||||
GoogleEnabled: cfg.Auth.SSO.Google.Enabled,
|
"script-src 'self'; "+
|
||||||
FacebookEnabled: cfg.Auth.SSO.Facebook.Enabled,
|
"style-src 'self'; "+
|
||||||
WebAuthnEnabled: cfg.Auth.TwoFactor.WebAuthnEnabled,
|
"img-src 'self' data:; "+
|
||||||
TOTPEnabled: cfg.Auth.TwoFactor.TOTPEnabled,
|
"connect-src 'self' https:; "+
|
||||||
},
|
"frame-ancestors 'none'; object-src 'none'; base-uri 'none'; form-action 'self'; "+
|
||||||
)
|
"require-trusted-types-for 'script'")
|
||||||
|
if onion != "" {
|
||||||
// Optional: also mount static under API mux (subpath) if you later want that.
|
w.Header().Set("Onion-Location", onion)
|
||||||
// srv.MountStatic(cfg.UI.Path, "/app")
|
|
||||||
|
|
||||||
// Start federation mTLS (if enabled)
|
|
||||||
if cfg.Federation.MTLSEnable {
|
|
||||||
tlsCfg, err := federation.ServerTLSConfig(
|
|
||||||
cfg.Federation.CertFile,
|
|
||||||
cfg.Federation.KeyFile,
|
|
||||||
cfg.Federation.ClientCAFile,
|
|
||||||
)
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("federation tls config error: %v", err)
|
|
||||||
}
|
}
|
||||||
go func() {
|
|
||||||
if err := srv.ListenMTLS(cfg.Federation.Listen, tlsCfg); err != nil {
|
// Basic CORS for static (GET only effectively)
|
||||||
log.Fatalf("federation mTLS listener error: %v", err)
|
w.Header().Set("Access-Control-Allow-Origin", "*")
|
||||||
}
|
if r.Method == http.MethodOptions {
|
||||||
}()
|
w.Header().Set("Access-Control-Allow-Methods", "GET, OPTIONS")
|
||||||
|
w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
|
||||||
|
w.WriteHeader(http.StatusNoContent)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
next.ServeHTTP(w, r)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
// ---- Config ----
|
||||||
|
httpAddr := os.Getenv("GC_HTTP_ADDR")
|
||||||
|
if httpAddr == "" {
|
||||||
|
httpAddr = ":9080"
|
||||||
|
}
|
||||||
|
httpsAddr := os.Getenv("GC_HTTPS_ADDR")
|
||||||
|
certFile := os.Getenv("GC_TLS_CERT")
|
||||||
|
keyFile := os.Getenv("GC_TLS_KEY")
|
||||||
|
|
||||||
|
staticAddr := os.Getenv("GC_STATIC_ADDR")
|
||||||
|
if staticAddr == "" {
|
||||||
|
staticAddr = ":9082"
|
||||||
|
}
|
||||||
|
staticDir := os.Getenv("GC_STATIC_DIR")
|
||||||
|
if staticDir == "" {
|
||||||
|
staticDir = "/opt/greencoast/client"
|
||||||
}
|
}
|
||||||
|
|
||||||
// Start FRONTEND listener (separate port) if enabled
|
dataDir := os.Getenv("GC_DATA_DIR")
|
||||||
if cfg.UI.Enable && cfg.UI.FrontendHTTP != "" {
|
if dataDir == "" {
|
||||||
go func() {
|
dataDir = "/var/lib/greencoast"
|
||||||
if err := srv.ListenFrontendHTTP(cfg.UI.FrontendHTTP, cfg.UI.Path, cfg.UI.BaseURL); err != nil {
|
|
||||||
log.Fatalf("frontend listener error: %v", err)
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Choose ONE foreground listener for API: HTTPS if enabled, else HTTP.
|
coarseTS := getenvBool("GC_COARSE_TS", true) // safer default (less precise metadata)
|
||||||
if cfg.TLS.Enable && cfg.Listen.HTTPS != "" {
|
zeroTrust := getenvBool("GC_ZERO_TRUST", true)
|
||||||
log.Fatal(srv.ListenHTTPS(cfg.Listen.HTTPS, cfg.TLS.CertFile, cfg.TLS.KeyFile))
|
encRequired := getenvBool("GC_ENCRYPTION_REQUIRED", true) // operator-blind by default
|
||||||
|
requirePOP := getenvBool("GC_REQUIRE_POP", true) // logged only here
|
||||||
|
|
||||||
|
signingSecretHex := os.Getenv("GC_SIGNING_SECRET_HEX")
|
||||||
|
if len(signingSecretHex) < 64 {
|
||||||
|
log.Printf("WARN: GC_SIGNING_SECRET_HEX length=%d (need >=64 hex chars)", len(signingSecretHex))
|
||||||
|
} else {
|
||||||
|
log.Printf("GC_SIGNING_SECRET_HEX OK (len=%d)", len(signingSecretHex))
|
||||||
|
}
|
||||||
|
|
||||||
|
discID := os.Getenv("GC_DISCORD_CLIENT_ID")
|
||||||
|
discSecret := os.Getenv("GC_DISCORD_CLIENT_SECRET")
|
||||||
|
discRedirect := os.Getenv("GC_DISCORD_REDIRECT_URI")
|
||||||
|
|
||||||
|
// ---- Storage & Index ----
|
||||||
|
store, err := storage.NewFS(dataDir)
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("storage init: %v", err)
|
||||||
|
}
|
||||||
|
ix := index.New()
|
||||||
|
|
||||||
|
// Reindex on boot from existing files (coarse time if enabled)
|
||||||
|
if err := store.Walk(func(hash string, size int64, mod time.Time) error {
|
||||||
|
when := mod.UTC()
|
||||||
|
if coarseTS {
|
||||||
|
when = when.Truncate(time.Minute)
|
||||||
|
}
|
||||||
|
return ix.Put(index.Entry{
|
||||||
|
Hash: hash,
|
||||||
|
Bytes: size,
|
||||||
|
StoredAt: when.Format(time.RFC3339Nano),
|
||||||
|
Private: false, // unknown here
|
||||||
|
})
|
||||||
|
}); err != nil {
|
||||||
|
log.Printf("reindex on boot: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---- Auth providers ----
|
||||||
|
providers := api.AuthProviders{
|
||||||
|
SigningSecretHex: signingSecretHex,
|
||||||
|
Discord: api.DiscordProvider{
|
||||||
|
Enabled: discID != "" && discSecret != "" && discRedirect != "",
|
||||||
|
ClientID: discID,
|
||||||
|
ClientSecret: discSecret,
|
||||||
|
RedirectURI: discRedirect,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---- API server ----
|
||||||
|
srv := api.New(store, ix, coarseTS, zeroTrust, providers, encRequired)
|
||||||
|
|
||||||
|
// ---- Static file server (separate listener) ----
|
||||||
|
go func() {
|
||||||
|
fs := http.FileServer(http.Dir(staticDir))
|
||||||
|
h := staticHeaders(fs)
|
||||||
|
log.Printf("static listening on %s (dir=%s)", staticAddr, staticDir)
|
||||||
|
if err := http.ListenAndServe(staticAddr, h); err != nil {
|
||||||
|
log.Fatalf("static server: %v", err)
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
|
||||||
|
// ---- Start API (HTTP or HTTPS) ----
|
||||||
|
if httpsAddr != "" && certFile != "" && keyFile != "" {
|
||||||
|
log.Printf("API HTTPS %s POP:%v ENC_REQUIRED:%v", httpsAddr, requirePOP, encRequired)
|
||||||
|
if err := srv.ListenHTTPS(httpsAddr, certFile, keyFile); err != nil {
|
||||||
|
log.Fatal(err)
|
||||||
|
}
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
if cfg.Listen.HTTP == "" {
|
log.Printf("API HTTP %s POP:%v ENC_REQUIRED:%v", httpAddr, requirePOP, encRequired)
|
||||||
log.Fatal("no API listeners configured (set listen.http or listen.https)")
|
if err := srv.ListenHTTP(httpAddr); err != nil {
|
||||||
|
log.Fatal(err)
|
||||||
}
|
}
|
||||||
log.Fatal(srv.ListenHTTP(cfg.Listen.HTTP))
|
|
||||||
}
|
}
|
||||||
|
configs/shard.test.yaml (new file, 69 lines)

```yaml
shard_id: "gc-test-001"

listen:
  http: "0.0.0.0:9080"    # API for testers
  https: ""               # if you terminate TLS at a proxy, leave empty
  ws: "0.0.0.0:9081"      # reserved

tls:
  enable: false           # set true only if serving HTTPS directly here
  cert_file: "/etc/greencoast/tls/cert.pem"
  key_file: "/etc/greencoast/tls/key.pem"

federation:
  mtls_enable: false
  listen: "0.0.0.0:9443"
  cert_file: "/etc/greencoast/fed/cert.pem"
  key_file: "/etc/greencoast/fed/key.pem"
  client_ca_file: "/etc/greencoast/fed/clients_ca.pem"

ui:
  enable: true
  path: "./client"
  base_url: "/"
  frontend_http: "0.0.0.0:9082"   # static client for testers

storage:
  backend: "fs"
  path: "/var/lib/greencoast/objects"
  max_object_kb: 128              # lower if you want to constrain uploads

security:
  zero_trust: true
  require_mtls_for_federation: true
  accept_client_signed_tokens: true
  log_level: "warn"

privacy:
  retain_ip: "no"
  retain_user_agent: "no"
  retain_timestamps: "coarse"

auth:
  # IMPORTANT: rotate this per environment (use `openssl rand -hex 32`)
  signing_secret: "D941C4F91D0046D28CDBC3F425DE0B4EA26BD2A80434E0F160D1B7C813EB43F8"
  sso:
    discord:
      enabled: true
      client_id: "1408292766319906946"
      client_secret: "zJ6GnUUykHbMFbWsPPneNxNK-PtOXYg1"
      # must exactly match your Discord app's allowed redirect
      redirect_uri: "https://greencoast.fullmooncyberworks.com/auth-callback.html"
    google:
      enabled: false
      client_id: ""
      client_secret: ""
      redirect_uri: ""
    facebook:
      enabled: false
      client_id: ""
      client_secret: ""
      redirect_uri: ""
  two_factor:
    webauthn_enabled: false
    totp_enabled: false

limits:
  rate:
    burst: 20
    per_minute: 60   # slightly tighter for external testing
```
docker-compose.test.yml (new file, 26 lines)

```yaml
version: "3.9"

services:
  shard-test:
    build: .
    container_name: greencoast-shard-test
    restart: unless-stopped
    user: "0:0"
    # These ports are optional (useful for local debug). Tunnel doesn't need them.
    ports:
      - "9080:9080"   # API
      - "9082:9082"   # Frontend
    environment:
      - GC_DEV_ALLOW_UNAUTH=true
      - GC_SIGNING_SECRET_HEX=92650f92d67d55368c852713a5007b90d933bff507bc77c980de7bf5442844ca
    volumes:
      - ./testdata:/var/lib/greencoast
      - ./configs/shard.test.yaml:/app/shard.yaml:ro
      - ./client:/app/client:ro

  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${CF_TUNNEL_TOKEN}
    restart: unless-stopped
    depends_on:
      - shard-test
```
Existing compose file (1 line added):

```diff
@@ -11,6 +11,7 @@ services:
       - "8081:8081"
     environment:
       - GC_DEV_ALLOW_UNAUTH=false
+      - GC_SIGNING_SECRET_HEX=92650f92d67d55368c852713a5007b90d933bff507bc77c980de7bf5442844ca
     volumes:
       - gc_data:/var/lib/greencoast
       - ./configs/shard.sample.yaml:/app/shard.yaml:ro
```
internal/api/http.go (1238 changed lines) — file diff suppressed because it is too large.
internal/api/ratelimit.go (new file, 78 lines)
@@ -0,0 +1,78 @@
package api

import (
    "net"
    "net/http"
    "sync"
    "time"
)

type rateLimiter struct {
    mu     sync.Mutex
    bk     map[string]*bucket
    rate   float64 // tokens per second
    burst  float64
    window time.Duration
}

type bucket struct {
    tokens float64
    last   time.Time
}

func newRateLimiter(rps float64, burst int, window time.Duration) *rateLimiter {
    return &rateLimiter{
        bk:     make(map[string]*bucket),
        rate:   rps,
        burst:  float64(burst),
        window: window,
    }
}

func (rl *rateLimiter) allow(key string) bool {
    now := time.Now()
    rl.mu.Lock()
    defer rl.mu.Unlock()

    b := rl.bk[key]
    if b == nil {
        b = &bucket{tokens: rl.burst, last: now}
        rl.bk[key] = b
    }
    // refill
    elapsed := now.Sub(b.last).Seconds()
    b.tokens = min(rl.burst, b.tokens+elapsed*rl.rate)
    b.last = now

    if b.tokens < 1.0 {
        return false
    }
    b.tokens -= 1.0

    // occasional cleanup
    for k, v := range rl.bk {
        if now.Sub(v.last) > rl.window {
            delete(rl.bk, k)
        }
    }
    return true
}

func min(a, b float64) float64 {
    if a < b {
        return a
    }
    return b
}

func clientIP(r *http.Request) string {
    // Prefer Cloudflare’s header if present; fall back to RemoteAddr.
    if ip := r.Header.Get("CF-Connecting-IP"); ip != "" {
        return ip
    }
    host, _, err := net.SplitHostPort(r.RemoteAddr)
    if err != nil {
        return r.RemoteAddr
    }
    return host
}
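A sketch of how this token-bucket limiter could be attached as middleware in the same package; the wrapper below is illustrative and not part of this diff (the real wiring presumably lives in internal/api/http.go, whose diff is suppressed above). The 60-per-minute rate and burst of 20 mirror the limits.rate block in the test config:

```go
// Hypothetical middleware wrapper around the limiter above (not part of this diff).
func withRateLimit(rl *rateLimiter, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Buckets are keyed by client IP; behind the Cloudflare tunnel this is CF-Connecting-IP.
		if !rl.allow(clientIP(r)) {
			w.Header().Set("Retry-After", "60")
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// Example construction: 60 requests/minute (1 token/sec), burst 20,
// with idle buckets pruned after 10 minutes.
// rl := newRateLimiter(60.0/60.0, 20, 10*time.Minute)
```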
@@ -1,71 +1,29 @@
Removed (the previous static-serving helpers):

package api

import (
    "log"
    "net/http"
    "os"
    "path/filepath"
    "strings"
    "time"
)

// Mount static on the API mux (kept for compatibility; still serves under API port if you want)
func (s *Server) MountStatic(dir string, baseURL string) {
    if dir == "" {
        return
    }
    if baseURL == "" {
        baseURL = "/"
    }
    s.mux.Handle(baseURL, s.staticHandler(dir, baseURL))
    if !strings.HasSuffix(baseURL, "/") {
        s.mux.Handle(baseURL+"/", s.staticHandler(dir, baseURL))
    }
}

// NEW: serve the same static handler on its own port (frontend).
func (s *Server) ListenFrontendHTTP(addr, dir, baseURL string) error {
    if dir == "" || addr == "" {
        return nil
    }
    log.Printf("frontend listening on %s (dir=%s base=%s)", addr, dir, baseURL)
    mx := http.NewServeMux()
    mx.Handle(baseURL, s.staticHandler(dir, baseURL))
    if !strings.HasSuffix(baseURL, "/") {
        mx.Handle(baseURL+"/", s.staticHandler(dir, baseURL))
    }
    server := &http.Server{
        Addr:              addr,
        Handler:           mx,
        ReadHeaderTimeout: 5 * time.Second,
    }
    return server.ListenAndServe()
}

func (s *Server) staticHandler(dir, baseURL string) http.Handler {
    if baseURL == "" {
        baseURL = "/"
    }
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        s.secureHeaders(w)
        up := strings.TrimPrefix(r.URL.Path, baseURL)
        if up == "" || strings.HasSuffix(r.URL.Path, "/") {
            up = "index.html"
        }
        full := filepath.Join(dir, filepath.FromSlash(up))
        if !strings.HasPrefix(filepath.Clean(full), filepath.Clean(dir)) {
            http.NotFound(w, r)
            return
        }
        if st, err := os.Stat(full); err == nil && !st.IsDir() {
            http.ServeFile(w, r, full)
            return
        }
        fallback := filepath.Join(dir, "index.html")
        if _, err := os.Stat(fallback); err == nil {
            http.ServeFile(w, r, fallback)
            return
        }
        http.NotFound(w, r)
    })
}

Added (the replacement):

package api

import (
    "net/http"
)

// secureHeaders adds strict, privacy-preserving headers to static responses.
func (s *Server) secureHeaders(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Referrer-Policy", "no-referrer")
        w.Header().Set("Cross-Origin-Opener-Policy", "same-origin")
        w.Header().Set("Cross-Origin-Resource-Policy", "same-site")
        w.Header().Set("Permissions-Policy", "camera=(), microphone=(), geolocation=(), interest-cohort=(), browsing-topics=()")
        w.Header().Set("X-Frame-Options", "DENY")
        w.Header().Set("X-Content-Type-Options", "nosniff")
        w.Header().Set("Strict-Transport-Security", "max-age=15552000; includeSubDomains; preload")
        next.ServeHTTP(w, r)
    })
}

// MountStatic mounts a static file server under a prefix onto the provided mux.
// Usage (from main): s.MountStatic(mux, "/", http.Dir(staticDir))
func (s *Server) MountStatic(mux *http.ServeMux, prefix string, fs http.FileSystem) {
    if prefix == "" {
        prefix = "/"
    }
    h := http.StripPrefix(prefix, http.FileServer(fs))
    mux.Handle(prefix, s.secureHeaders(h))
}
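For orientation, a standalone sketch of the same mount pattern (StripPrefix + FileServer behind a header-setting wrapper), runnable on its own. The wrapper name, directory, and port below are illustrative stand-ins that echo ui.path and ui.frontend_http from the test config, not the shard's actual main:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// nosniffWrap is a tiny stand-in for the secureHeaders middleware above.
func nosniffWrap(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Content-Type-Options", "nosniff")
		w.Header().Set("Referrer-Policy", "no-referrer")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// Mirrors MountStatic(mux, "/", http.Dir("./client")): strip the prefix, serve files, wrap headers.
	mux.Handle("/", nosniffWrap(http.StripPrefix("/", http.FileServer(http.Dir("./client")))))
	srv := &http.Server{Addr: "0.0.0.0:9082", Handler: mux, ReadHeaderTimeout: 5 * time.Second}
	log.Fatal(srv.ListenAndServe())
}
```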
internal/auth/gc2.go (new file, 78 lines)
@@ -0,0 +1,78 @@
package auth

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "encoding/hex"
    "encoding/json"
    "errors"
    "strings"
    "time"
)

type Claims struct {
    Sub string `json:"sub"`           // account ID (acc_…)
    Exp int64  `json:"exp"`           // unix seconds
    Nbf int64  `json:"nbf,omitempty"` // not before
    Iss string `json:"iss,omitempty"` // greencoast
    Aud string `json:"aud,omitempty"` // api
    Jti string `json:"jti,omitempty"` // token id (optional)
    CNF string `json:"cnf,omitempty"` // key binding: "p256:<b64raw>" or "ed25519:<b64raw>"
}

func MintGC2(signKey []byte, c Claims) (string, error) {
    if len(signKey) == 0 {
        return "", errors.New("sign key missing")
    }
    if c.Sub == "" || c.Exp == 0 {
        return "", errors.New("claims incomplete")
    }
    body, _ := json.Marshal(c)
    mac := hmac.New(sha256.New, signKey)
    mac.Write(body)
    sig := mac.Sum(nil)
    return "gc2." + base64.RawURLEncoding.EncodeToString(body) + "." + base64.RawURLEncoding.EncodeToString(sig), nil
}

func VerifyGC2(signKey []byte, tok string, now time.Time) (Claims, error) {
    var zero Claims
    if !strings.HasPrefix(tok, "gc2.") {
        return zero, errors.New("bad prefix")
    }
    parts := strings.Split(tok, ".")
    if len(parts) != 3 {
        return zero, errors.New("bad parts")
    }
    body, err := base64.RawURLEncoding.DecodeString(parts[1])
    if err != nil {
        return zero, err
    }
    want, err := base64.RawURLEncoding.DecodeString(parts[2])
    if err != nil {
        return zero, err
    }
    mac := hmac.New(sha256.New, signKey)
    mac.Write(body)
    if !hmac.Equal(want, mac.Sum(nil)) {
        return zero, errors.New("bad sig")
    }
    var c Claims
    if err := json.Unmarshal(body, &c); err != nil {
        return zero, err
    }
    t := now.Unix()
    if c.Nbf != 0 && t < c.Nbf {
        return zero, errors.New("nbf")
    }
    if t > c.Exp {
        return zero, errors.New("expired")
    }
    return c, nil
}

func AccountIDFromPub(raw []byte) string {
    // acc_<first32 hex of sha256(pub)>
    sum := sha256.Sum256(raw)
    return "acc_" + hex.EncodeToString(sum[:16])
}
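A minimal round-trip sketch for the helpers above, written as if it lived in the same `auth` package (for example as a test). The key bytes, expiry, and the CNF placeholder are arbitrary illustration values; the real key comes from auth.signing_secret / GC_SIGNING_SECRET_HEX and the key binding is enforced by the PoP layer:

```go
package auth

import (
	"bytes"
	"testing"
	"time"
)

func TestGC2RoundTrip(t *testing.T) {
	// Illustrative 32-byte HMAC key; not a real secret.
	key := bytes.Repeat([]byte{0x42}, 32)

	now := time.Now()
	tok, err := MintGC2(key, Claims{
		Sub: AccountIDFromPub([]byte("fake-public-key-bytes")),
		Exp: now.Add(10 * time.Minute).Unix(),
		Iss: "greencoast",
		Aud: "api",
		CNF: "p256:BASE64_RAW_PUBLIC_KEY", // placeholder; checked elsewhere by the PoP layer
	})
	if err != nil {
		t.Fatal(err)
	}

	got, err := VerifyGC2(key, tok, now)
	if err != nil {
		t.Fatal(err)
	}
	if got.Sub == "" || got.Aud != "api" {
		t.Fatalf("unexpected claims: %+v", got)
	}

	// An expired token must be rejected.
	if _, err := VerifyGC2(key, tok, now.Add(time.Hour)); err == nil {
		t.Fatal("expected expiry error")
	}
}
```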
@@ -1,123 +1,63 @@
Removed (the previous append-only JSONL index):

package index

import (
    "bufio"
    "encoding/json"
    "os"
    "path/filepath"
    "sort"
    "sync"
    "time"
)

type opType string

const (
    OpPut opType = "put"
    OpDel opType = "del"
)

type record struct {
    Op       opType    `json:"op"`
    Hash     string    `json:"hash"`
    Bytes    int64     `json:"bytes,omitempty"`
    StoredAt time.Time `json:"stored_at,omitempty"`
    Private  bool      `json:"private,omitempty"`
}

type Entry struct {
    Hash     string    `json:"hash"`
    Bytes    int64     `json:"bytes"`
    StoredAt time.Time `json:"stored_at"`
    Private  bool      `json:"private"`
}

type Index struct {
    path string
    mu   sync.Mutex
}

func New(baseDir string) *Index {
    return &Index{path: filepath.Join(baseDir, "index.jsonl")}
}

func (i *Index) AppendPut(e Entry) error {
    i.mu.Lock()
    defer i.mu.Unlock()
    return appendRec(i.path, record{
        Op:       OpPut,
        Hash:     e.Hash,
        Bytes:    e.Bytes,
        StoredAt: e.StoredAt,
        Private:  e.Private,
    })
}

func (i *Index) AppendDelete(hash string) error {
    i.mu.Lock()
    defer i.mu.Unlock()
    return appendRec(i.path, record{Op: OpDel, Hash: hash})
}

func appendRec(path string, r record) error {
    if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
        return err
    }
    f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
    if err != nil {
        return err
    }
    defer f.Close()
    enc := json.NewEncoder(f)
    return enc.Encode(r)
}

func (i *Index) Snapshot() ([]Entry, error) {
    i.mu.Lock()
    defer i.mu.Unlock()

    f, err := os.Open(i.path)
    if os.IsNotExist(err) {
        return nil, nil
    }
    if err != nil {
        return nil, err
    }
    defer f.Close()

    sc := bufio.NewScanner(f)
    sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)

    type state struct {
        Entry   Entry
        Deleted bool
    }
    m := make(map[string]state)
    for sc.Scan() {
        var rec record
        if err := json.Unmarshal(sc.Bytes(), &rec); err != nil {
            continue
        }
        switch rec.Op {
        case OpPut:
            m[rec.Hash] = state{Entry: Entry{
                Hash: rec.Hash, Bytes: rec.Bytes, StoredAt: rec.StoredAt, Private: rec.Private,
            }}
        case OpDel:
            s := m[rec.Hash]
            s.Deleted = true
            m[rec.Hash] = s
        }
    }
    if err := sc.Err(); err != nil {
        return nil, err
    }
    var out []Entry
    for _, s := range m {
        if !s.Deleted && s.Entry.Hash != "" {
            out = append(out, s.Entry)
        }
    }
    sort.Slice(out, func(i, j int) bool { return out[i].StoredAt.After(out[j].StoredAt) })
    return out, nil
}

Added (the new in-memory index):

package index

import (
    "errors"
    "sync"
)

// Entry is the minimal metadata we expose to clients.
type Entry struct {
    Hash      string `json:"hash"`
    Bytes     int64  `json:"bytes"`
    StoredAt  string `json:"stored_at"`            // RFC3339Nano
    Private   bool   `json:"private"`              // true if client marked encrypted
    CreatorTZ string `json:"creator_tz,omitempty"` // optional IANA TZ from client
}

// Index is an in-memory map from hash -> Entry, safe for concurrent use.
type Index struct {
    mu sync.RWMutex
    m  map[string]Entry
}

func New() *Index {
    return &Index{m: make(map[string]Entry)}
}

func (ix *Index) Put(e Entry) error {
    if e.Hash == "" {
        return errors.New("empty hash")
    }
    ix.mu.Lock()
    ix.m[e.Hash] = e
    ix.mu.Unlock()
    return nil
}

func (ix *Index) Delete(hash string) error {
    if hash == "" {
        return errors.New("empty hash")
    }
    ix.mu.Lock()
    delete(ix.m, hash)
    ix.mu.Unlock()
    return nil
}

func (ix *Index) Get(hash string) (Entry, bool) {
    ix.mu.RLock()
    e, ok := ix.m[hash]
    ix.mu.RUnlock()
    return e, ok
}

// All returns an unsorted copy of all entries.
func (ix *Index) All() []Entry {
    ix.mu.RLock()
    out := make([]Entry, 0, len(ix.m))
    for _, v := range ix.m {
        out = append(out, v)
    }
    ix.mu.RUnlock()
    return out
}
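A small round-trip sketch for the rewritten in-memory index, again in same-package test style; the hash is borrowed from testdata/index.jsonl below and the other field values are illustrative:

```go
package index

import (
	"testing"
	"time"
)

func TestIndexRoundTrip(t *testing.T) {
	ix := New()

	e := Entry{
		Hash:     "a008a13ade86edbd77f5c0fcfcf35bd295c93069be42fdbd46bc65b392ddf5fb",
		Bytes:    110,
		StoredAt: time.Now().UTC().Format(time.RFC3339Nano), // StoredAt is now an RFC3339Nano string
		Private:  false,
	}
	if err := ix.Put(e); err != nil {
		t.Fatal(err)
	}

	if got, ok := ix.Get(e.Hash); !ok || got.Bytes != 110 {
		t.Fatalf("unexpected entry: %+v (ok=%v)", got, ok)
	}
	if n := len(ix.All()); n != 1 {
		t.Fatalf("expected 1 entry, got %d", n)
	}

	if err := ix.Delete(e.Hash); err != nil {
		t.Fatal(err)
	}
	if _, ok := ix.Get(e.Hash); ok {
		t.Fatal("entry should be gone after Delete")
	}
}
```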
internal/storage/fs.go (new file, 240 lines)
@@ -0,0 +1,240 @@
package storage

import (
    "errors"
    "io"
    "io/fs"
    "os"
    "path/filepath"
    "strings"
    "time"
)

type FSStore struct {
    root    string
    objects string
}

func NewFS(dir string) (*FSStore, error) {
    if dir == "" {
        return nil, errors.New("empty storage dir")
    }
    o := filepath.Join(dir, "objects")
    if err := os.MkdirAll(o, 0o755); err != nil {
        return nil, err
    }
    return &FSStore{root: dir, objects: o}, nil
}

func (s *FSStore) pathFlat(hash string) (string, error) {
    if hash == "" {
        return "", errors.New("empty hash")
    }
    return filepath.Join(s.objects, hash), nil
}

func isHexHash(name string) bool {
    if len(name) != 64 {
        return false
    }
    for i := 0; i < 64; i++ {
        c := name[i]
        if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f')) {
            return false
        }
    }
    return true
}

func (s *FSStore) findBlobPath(hash string) (string, error) {
    if hash == "" {
        return "", errors.New("empty hash")
    }
    // 1) flat
    if p, _ := s.pathFlat(hash); fileExists(p) {
        return p, nil
    }
    // 2) objects/<hash>/{blob,data,content}
    dir := filepath.Join(s.objects, hash)
    for _, cand := range []string{"blob", "data", "content"} {
        p := filepath.Join(dir, cand)
        if fileExists(p) {
            return p, nil
        }
    }
    // 3) objects/<hash>/<single file>
    if st, err := os.Stat(dir); err == nil && st.IsDir() {
        ents, _ := os.ReadDir(dir)
        var picked string
        var pickedMod time.Time
        for _, de := range ents {
            if de.IsDir() {
                continue
            }
            p := filepath.Join(dir, de.Name())
            fi, err := os.Stat(p)
            if err == nil && fi.Mode().IsRegular() {
                if picked == "" || fi.ModTime().After(pickedMod) {
                    picked, pickedMod = p, fi.ModTime()
                }
            }
        }
        if picked != "" {
            return picked, nil
        }
    }
    // 4) two-level prefix objects/aa/<hash>
    if len(hash) >= 2 {
        p := filepath.Join(s.objects, hash[:2], hash)
        if fileExists(p) {
            return p, nil
        }
    }
    // 5) recursive search
    var best string
    var bestMod time.Time
    _ = filepath.WalkDir(s.objects, func(p string, d fs.DirEntry, err error) error {
        if err != nil || d.IsDir() {
            return nil
        }
        base := filepath.Base(p)
        if base == hash {
            best = p
            return fs.SkipDir
        }
        parent := filepath.Base(filepath.Dir(p))
        if parent == hash {
            if fi, err := os.Stat(p); err == nil && fi.Mode().IsRegular() {
                if best == "" || fi.ModTime().After(bestMod) {
                    best, bestMod = p, fi.ModTime()
                }
            }
        }
        return nil
    })
    if best != "" {
        return best, nil
    }
    return "", os.ErrNotExist
}

func fileExists(p string) bool {
    fi, err := os.Stat(p)
    return err == nil && fi.Mode().IsRegular()
}

func (s *FSStore) Put(hash string, r io.Reader) error {
    p, err := s.pathFlat(hash)
    if err != nil {
        return err
    }
    if err := os.MkdirAll(filepath.Dir(p), 0o755); err != nil {
        return err
    }
    tmp := p + ".tmp"
    f, err := os.Create(tmp)
    if err != nil {
        return err
    }
    _, werr := io.Copy(f, r)
    cerr := f.Close()
    if werr != nil {
        _ = os.Remove(tmp)
        return werr
    }
    if cerr != nil {
        _ = os.Remove(tmp)
        return cerr
    }
    return os.Rename(tmp, p)
}

func (s *FSStore) Get(hash string) (io.ReadCloser, int64, error) {
    p, err := s.findBlobPath(hash)
    if err != nil {
        return nil, 0, err
    }
    f, err := os.Open(p)
    if err != nil {
        return nil, 0, err
    }
    st, err := f.Stat()
    if err != nil {
        return f, 0, nil
    }
    return f, st.Size(), nil
}

func (s *FSStore) Delete(hash string) error {
    if p, _ := s.pathFlat(hash); fileExists(p) {
        if err := os.Remove(p); err == nil || errors.Is(err, os.ErrNotExist) {
            return nil
        }
    }
    dir := filepath.Join(s.objects, hash)
    for _, cand := range []string{"blob", "data", "content"} {
        p := filepath.Join(dir, cand)
        if fileExists(p) {
            if err := os.Remove(p); err == nil || errors.Is(err, os.ErrNotExist) {
                return nil
            }
        }
    }
    if len(hash) >= 2 {
        p := filepath.Join(s.objects, hash[:2], hash)
        if fileExists(p) {
            if err := os.Remove(p); err == nil || errors.Is(err, os.ErrNotExist) {
                return nil
            }
        }
    }
    if p, err := s.findBlobPath(hash); err == nil {
        if err := os.Remove(p); err == nil || errors.Is(err, os.ErrNotExist) {
            return nil
        }
    }
    return nil
}

func (s *FSStore) Walk(fn func(hash string, size int64, mod time.Time) error) error {
    type rec struct {
        size int64
        mod  time.Time
    }
    agg := make(map[string]rec)
    _ = filepath.WalkDir(s.objects, func(p string, d fs.DirEntry, err error) error {
        if err != nil || d.IsDir() {
            return nil
        }
        fi, err := os.Stat(p)
        if err != nil || !fi.Mode().IsRegular() {
            return nil
        }
        base := filepath.Base(p)
        if isHexHash(base) {
            if r, ok := agg[base]; !ok || fi.ModTime().After(r.mod) {
                agg[base] = rec{fi.Size(), fi.ModTime()}
            }
            return nil
        }
        parent := filepath.Base(filepath.Dir(p))
        if isHexHash(parent) {
            if r, ok := agg[parent]; !ok || fi.ModTime().After(r.mod) {
                agg[parent] = rec{fi.Size(), fi.ModTime()}
            }
            return nil
        }
        if len(base) == 64 && isHexHash(strings.ToLower(base)) {
            if r, ok := agg[base]; !ok || fi.ModTime().After(r.mod) {
                agg[base] = rec{fi.Size(), fi.ModTime()}
            }
        }
        return nil
    })
    for h, r := range agg {
        if err := fn(h, r.size, r.mod); err != nil {
            return err
        }
    }
    return nil
}
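A short sketch of the new FSStore's Put/Get/Delete flow in same-package test style. Unlike the removed store below, the hash is supplied by the caller; computing a SHA-256 here is illustrative of what the API layer would do, and the payload is made up:

```go
package storage

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"io"
	"testing"
)

func TestFSStorePutGet(t *testing.T) {
	st, err := NewFS(t.TempDir())
	if err != nil {
		t.Fatal(err)
	}

	// Caller-supplied content address: SHA-256 of the payload as 64 hex chars.
	payload := []byte(`{"text":"hello greencoast"}`)
	sum := sha256.Sum256(payload)
	hash := hex.EncodeToString(sum[:])

	if err := st.Put(hash, bytes.NewReader(payload)); err != nil {
		t.Fatal(err)
	}

	rc, size, err := st.Get(hash)
	if err != nil {
		t.Fatal(err)
	}
	defer rc.Close()
	got, _ := io.ReadAll(rc)
	if size != int64(len(payload)) || !bytes.Equal(got, payload) {
		t.Fatalf("round trip mismatch: size=%d got=%q", size, got)
	}

	if err := st.Delete(hash); err != nil {
		t.Fatal(err)
	}
}
```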
@@ -1,95 +0,0 @@
Removed (entire file; the previous FSStore hashed objects itself on write):

package storage

import (
    "crypto/sha256"
    "encoding/hex"
    "errors"
    "io"
    "os"
    "path/filepath"
)

type FSStore struct {
    root       string
    maxObjectB int64
}

func NewFSStore(root string, maxKB int) (*FSStore, error) {
    if root == "" {
        root = "./data/objects"
    }
    if err := os.MkdirAll(root, 0o755); err != nil {
        return nil, err
    }
    return &FSStore{root: root, maxObjectB: int64(maxKB) * 1024}, nil
}

func (s *FSStore) Put(r io.Reader) (string, int64, error) {
    h := sha256.New()
    tmp := filepath.Join(s.root, ".tmp")
    _ = os.MkdirAll(tmp, 0o755)
    f, err := os.CreateTemp(tmp, "obj-*")
    if err != nil {
        return "", 0, err
    }
    defer f.Close()

    var n int64
    buf := make([]byte, 32*1024)
    for {
        m, er := r.Read(buf)
        if m > 0 {
            n += int64(m)
            if s.maxObjectB > 0 && n > s.maxObjectB {
                return "", 0, errors.New("object too large")
            }
            _, _ = h.Write(buf[:m])
            if _, werr := f.Write(buf[:m]); werr != nil {
                return "", 0, werr
            }
        }
        if er == io.EOF {
            break
        }
        if er != nil {
            return "", 0, er
        }
    }
    sum := hex.EncodeToString(h.Sum(nil))
    dst := filepath.Join(s.root, sum[:2], sum[2:4], sum)
    if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
        return "", 0, err
    }
    if err := os.Rename(f.Name(), dst); err != nil {
        return "", 0, err
    }
    return sum, n, nil
}

func (s *FSStore) pathFor(hash string) string {
    return filepath.Join(s.root, hash[:2], hash[2:4], hash)
}

func (s *FSStore) Get(hash string) (string, error) {
    if len(hash) < 4 {
        return "", os.ErrNotExist
    }
    p := s.pathFor(hash)
    if _, err := os.Stat(p); err != nil {
        return "", err
    }
    return p, nil
}

func (s *FSStore) Delete(hash string) error {
    if len(hash) < 4 {
        return os.ErrNotExist
    }
    p := s.pathFor(hash)
    if err := os.Remove(p); err != nil {
        return err
    }
    _ = os.Remove(filepath.Dir(p))
    _ = os.Remove(filepath.Dir(filepath.Dir(p)))
    return nil
}
testdata/index.jsonl (new file, vendored, 4 lines)
@@ -0,0 +1,4 @@
{"op":"put","hash":"a008a13ade86edbd77f5c0fcfcf35bd295c93069be42fdbd46bc65b392ddf5fb","bytes":110,"stored_at":"2025-08-22T03:00:00Z"}
{"op":"put","hash":"9628e2adcd7a5e820fbdbe075027ac0ad78ef1a7a501971c2048bc5e5436b891","bytes":105,"stored_at":"2025-08-22T03:00:00Z","private":true}
{"op":"put","hash":"6a166437b9988bd11e911375f3ca1b4cd10b7db9a32812409c6d79a0753dd973","bytes":98,"stored_at":"2025-08-22T03:00:00Z"}
{"op":"put","hash":"f452402fadb6608bd6f9b613a1d58234e2135f045ea29262574e3e4b1e5f7292","bytes":46,"stored_at":"2025-08-22T03:00:00Z"}