5 Commits

SHA1        Message                                                              Date
0ff358552c  Finished updating the readme                                         2025-08-22 12:40:09 -04:00
720c7e0b52  Updated the README; Added new security layers                        2025-08-22 12:39:51 -04:00
fb7428064f  Fixed the Discord SSO somewhat; Fixed FS system; Added TZ options    2025-08-22 12:00:58 -04:00
7ff3f43c93  Troubleshooting the cloudflare and the api not communicating         2025-08-21 22:57:48 -04:00
82eed71d7e  Doing some testing to make sure that Cloudflare works with the app   2025-08-21 22:24:50 -04:00
20 changed files with 1834 additions and 858 deletions

Dockerfile

@@ -31,7 +31,7 @@ FROM gcr.io/distroless/base-debian12:nonroot
WORKDIR /app
COPY --from=build /out/greencoast-shard /app/greencoast-shard
COPY configs/shard.sample.yaml /app/shard.yaml
COPY client /app/client
COPY client/ /opt/greencoast/client/
VOLUME ["/var/lib/greencoast"]
EXPOSE 8080 8081 8443 9443
USER nonroot:nonroot

README.md

@@ -1,24 +1,224 @@
# GreenCoast — Privacy-First, Shardable Social (Dockerized)
# GreenCoast
**Goal:** A BlueSky-like experience with **shards**, **zero-trust**, **no data collection**, **E2EE**, and easy self-hosting — from x86_64 down to **Raspberry Pi Zero**.
License: **The Unlicense** (public-domain equivalent).
This repo contains a minimal, working **shard**: an append-only object API with zero-data-collection defaults. It's structured to evolve into full federation, E2EE, and client apps, while keeping the Pi Zero as a supported host.
A privacy-first, shardable social backend + minimalist client. **Zero PII**, **zero passwords**, optional **E2EE per post**, and **public-key accounts**. Includes **DPoP-style proof-of-possession**, **Discord SSO with PKCE**, and a tiny static client.
---
## Quick Start (Laptop / Dev)
## Features
**Requirements:** Docker + Compose v2
- **Zero-trust by design**: server stores no emails or passwords.
- **Accounts = public keys** (Ed25519 or P-256). No usernames required.
- **Proof-of-possession (PoP)** on every authenticated API call.
- **Short-lived tokens** (HMAC “gc2”) bound to device keys.
- **Shardable storage** (mTLS or signed shard requests).
- **No fingerprinting**: no IP/UA logs; coarse timestamps optional.
- **Static client** with strong CSP; optional E2EE per post.
- **Discord SSO (PKCE)** as an *optional* convenience.
- **Filesystem storage** supports both **flat** and **nested** object layouts.
---
## Architecture (brief)
- **Shard**: stateless API + local FS object store + in-memory index.
- **Client**: static files (HTML/JS/CSS) served by the shard or any static host.
- **Identity**: device key (P-256/Ed25519) or passkey; server mints short-lived **gc2** tokens bound to the device key (`cnf` claim).
- **Privacy**: objects can be plaintext (public) or client-encrypted (private).
---
## Security posture
- **Zero-trust**: no passwords/emails; optional SSO is *linking*, not source-of-truth.
- **DPoP-style PoP** on requests:
- Client sends:
- `Authorization: Bearer gc2.…`
- `X-GC-Key: p256:<base64-raw>` (or `ed25519:…`)
- `X-GC-TS: <unix seconds>`
- `X-GC-Proof: sig( METHOD "\n" URL "\n" TS "\n" SHA256(body) )`
- Server verifies `gc2` signature, key binding (`cnf`), timestamp window, and replay cache.
- **Replay protection**: 10-minute proof cache.
- **No fingerprinting/logging**: no IPs, no UAs.
- **Strict CSP** for client: blocks XSS/token theft.
- **Limits**: request body limits (default 10 MiB), simple per-account rate limiting.
- **Shard↔shard**: mTLS or per-shard signatures with timestamp + replay cache.
---
## Requirements
- Go 1.21+
- Docker (optional)
- A signing key for tokens: `GC_SIGNING_SECRET_HEX` (32+ bytes hex)
- (Optional) Discord OAuth app (Client ID/Secret + redirect URI)
- (Optional) Cloudflare Tunnel or other TLS reverse proxy
---
## Environment variables
GC_HTTP_ADDR=:9080
GC_HTTPS_ADDR= # optional
GC_TLS_CERT= # optional
GC_TLS_KEY= # optional
GC_STATIC_ADDR=:9082
GC_STATIC_DIR=/opt/greencoast/client
GC_DATA_DIR=/var/lib/greencoast
GC_ZERO_TRUST=true
GC_COARSE_TS=false
GC_SIGNING_SECRET_HEX=<64+ hex chars> # required for gc2 tokens
GC_REQUIRE_POP=true # default true; set false for first-run
# Dev convenience (testing only; disable for production)
GC_DEV_ALLOW_UNAUTH=false
GC_DEV_BEARER=
# Discord SSO (optional)
GC_DISCORD_CLIENT_ID=
GC_DISCORD_CLIENT_SECRET=
GC_DISCORD_REDIRECT_URI=https://greencoast.example.com/auth-callback.html
---
## Quickstart (Docker)
Minimal compose for local testing (PoP disabled + dev unauth allowed for first run):
services:
shard-test:
build: .
environment:
- GC_HTTP_ADDR=:9080
- GC_STATIC_ADDR=:9082
- GC_STATIC_DIR=/opt/greencoast/client
- GC_DATA_DIR=/var/lib/greencoast
- GC_ZERO_TRUST=true
- GC_SIGNING_SECRET_HEX=7f6e1a0f2b4d7e3a... # replace with your secret
- GC_REQUIRE_POP=false # easier first-run
- GC_DEV_ALLOW_UNAUTH=true
volumes:
- ./testdata:/var/lib/greencoast
- ./client:/opt/greencoast/client:ro
ports:
- "9080:9080"
- "9082:9082"
Open `http://localhost:9082` → set the Shard URL (`http://localhost:9080`) → publish a test post.
When ready, **turn PoP on** by removing `GC_REQUIRE_POP=false` and disabling `GC_DEV_ALLOW_UNAUTH`.
---
## Cloudflare Tunnel example
ingress:
- hostname: greencoast.example.com
service: http://shard-test:9082
- hostname: api-gc.greencoast.example.com
service: http://shard-test:9080
- service: http_status:404
Use “Full (strict)” TLS and ensure your cert covers both hosts.
---
## Client usage
- **Shard URL**: set it in the top “Connect” section (or use `?api=` query or `<meta name="gc-api-base">`).
- **Device key sign-in (no OAuth)**:
1) Client generates/stores a P-256 device key in the browser.
2) Client calls `/v1/auth/key/challenge` then `/v1/auth/key/verify` to obtain a **gc2** token bound to that key.
- **Discord SSO (optional)**:
- Requires `GC_DISCORD_CLIENT_*` env vars and a valid `GC_DISCORD_REDIRECT_URI`.
- Uses PKCE (`S256`) and binds the minted **gc2** token to the device key presented at `/start`.
---
## API (overview)
- `GET /healthz` liveness
- `PUT /v1/object` upload blob (headers: optional `X-GC-Private: 1`, `X-GC-TZ`)
- `GET /v1/object/{hash}` download blob
- `DELETE /v1/object/{hash}` delete blob
- `GET /v1/index` list indexed entries (latest first)
- `GET /v1/index/stream` SSE updates
- `POST /v1/admin/reindex` rebuild index from disk
- **Auth**
- `POST /v1/auth/key/challenge` → `{nonce, exp}`
- `POST /v1/auth/key/verify` with `{nonce, alg, pub, sig}` → `{bearer, sub, exp}`
- `POST /v1/auth/discord/start` (requires `X-GC-3P-Assent: 1` and `X-GC-Key`)
- `GET /v1/auth/discord/callback` → redirects with `#bearer=…`
- **GDPR**
- `GET /v1/gdpr/policy` current data-handling posture
> When `GC_REQUIRE_POP=true`, all authenticated endpoints require PoP headers.
### PoP header format (pseudocode)
Authorization: Bearer gc2.<claims>.<sig>
X-GC-Key: p256:<base64-raw> # or ed25519:<base64-raw>
X-GC-TS: <unix seconds>
X-GC-Proof: base64(
Sign_device_key(
UPPER(METHOD) + "\n" + URL + "\n" + X-GC-TS + "\n" + SHA256(body)
)
)
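A Go sketch of assembling and signing that string follows. The helper names are assumptions, and the signature encoding (ASN.1 here) may differ from the server's expectation; the body digest is hex-encoded SHA-256, matching the client code.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
	"time"
)

// popMessage assembles the string the device key signs:
// UPPER(METHOD) "\n" URL "\n" TS "\n" hex(SHA256(body)).
func popMessage(method, url, ts string, body []byte) string {
	digest := sha256.Sum256(body)
	return strings.ToUpper(method) + "\n" + url + "\n" + ts + "\n" + fmt.Sprintf("%x", digest)
}

func main() {
	// Ephemeral P-256 device key for demonstration only.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ts := fmt.Sprint(time.Now().Unix())
	msg := popMessage("put", "https://api-gc.example.com/v1/object", ts, []byte("hello"))
	h := sha256.Sum256([]byte(msg))
	sig, _ := ecdsa.SignASN1(rand.Reader, key, h[:])
	fmt.Println("X-GC-TS:   ", ts)
	fmt.Println("X-GC-Proof:", base64.RawURLEncoding.EncodeToString(sig))
	// The server recomputes msg and verifies with the raw public key from X-GC-Key.
	fmt.Println("verified:", ecdsa.VerifyASN1(&key.PublicKey, h[:], sig))
}
```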
---
## Storage layout & migration
- **Writes** are flat: `objects/<hash>`
- **Reads** (and reindex) also support:
- `objects/<hash>/blob|data|content`
- `objects/<hash>/<single file>`
- `objects/<prefix>/<hash>` (two-level prefix)
- To **restore** data into a fresh container:
1) Mount your objects at `/var/lib/greencoast/objects`
2) Call `POST /v1/admin/reindex` (with auth+PoP or enable dev unauth briefly)
---
## Reindex examples
Unauth (dev only):
curl -X POST https://api-gc.yourdomain/v1/admin/reindex
With bearer + PoP (placeholders):
curl -X POST https://api-gc.yourdomain/v1/admin/reindex \
  -H "Authorization: Bearer <gc2_token>" \
  -H "X-GC-Key: p256:<base64raw>" \
  -H "X-GC-TS: <unix>" \
  -H "X-GC-Proof: <base64sig>"
---
## Hardening checklist (prod)
- Set `GC_REQUIRE_POP=true`, remove dev bypass.
- Keep access token TTL ≤ 8h; rotate signing key periodically.
- Static client served with strong CSP (already enabled).
- Containers run non-root, read-only FS, `no-new-privileges`, `cap_drop: ["ALL"]`.
- Edge WAF/rate limits; 10 MiB default request cap (tunable).
- Commit `go.sum`; run `go mod verify` in CI.
---
## GDPR
- Server stores **no PII** (no emails, no IP/UA logs).
- Timestamps are UTC (or coarse UTC if enabled).
- `/v1/gdpr/policy` exposes current posture.
- Roadmap: `/v1/gdpr/export` and `/v1/gdpr/delete` to enumerate/remove blobs signed by a given key.
---
## License
This project is licensed under **The Unlicense**. See `LICENSE` for details.
```bash
git clone <your repo> greencoast
cd greencoast
cp .env.example .env
docker compose -f docker-compose.dev.yml up --build
# Health:
curl -s http://localhost:8080/healthz
# Put an object (dev mode allows unauthenticated PUT/GET):
curl -s -X PUT --data-binary @README.md http://localhost:8080/v1/object
# -> {"ok":true,"hash":"<sha256>",...}
curl -s http://localhost:8080/v1/object/<sha256> | head
```


@@ -17,73 +17,144 @@ const els = {
const LS_KEY = "gc_client_config_v1";
const POSTS_KEY = "gc_posts_index_v1";
const DEVKEY_KEY = "gc_device_key_v1"; // stores p256 private/public (pkcs8/spki b64)
const cfg = loadConfig(); applyConfig(); checkHealth(); syncIndex(); sse();
function defaultApiBase() {
try {
const qs = new URLSearchParams(window.location.search);
const qApi = qs.get("api");
if (qApi) return qApi.replace(/\/+$/, "");
} catch {}
const m = document.querySelector('meta[name="gc-api-base"]');
if (m && m.content) return m.content.replace(/\/+$/, "");
try {
const u = new URL(window.location.href);
const proto = u.protocol;
const host = u.hostname;
const portStr = u.port;
const bracketHost = host.includes(":") ? `[${host}]` : host;
const port = portStr ? parseInt(portStr, 10) : null;
let apiPort = port;
if (port === 8082) apiPort = 8080;
else if (port === 9082) apiPort = 9080;
else if (port) apiPort = Math.max(1, port - 2);
return apiPort ? `${proto}//${bracketHost}:${apiPort}` : `${proto}//${bracketHost}`;
} catch {
return window.location.origin.replace(/\/+$/, "");
}
}
const cfg = loadConfig(); applyConfig(); (async () => {
await ensureDeviceKey();
await checkHealth(); await syncIndex(); sse();
})();
els.saveConn.onclick = async () => {
const c = { url: norm(els.shardUrl.value), bearer: els.bearer.value.trim(), passphrase: els.passphrase.value };
saveConfig(c); await checkHealth(); await syncIndex(); sse(true);
saveConfig(c);
await checkHealth(); await syncIndex(); sse(true);
};
els.publish.onclick = publish;
els.discordStart.onclick = discordStart;
// -------- local state helpers --------
function loadConfig(){ try { return JSON.parse(localStorage.getItem(LS_KEY)) ?? {}; } catch { return {}; } }
function saveConfig(c){ localStorage.setItem(LS_KEY, JSON.stringify(c)); Object.assign(cfg, c); }
function getPosts(){ try { return JSON.parse(localStorage.getItem(POSTS_KEY)) ?? []; } catch { return []; } }
function setPosts(v){ localStorage.setItem(POSTS_KEY, JSON.stringify(v)); renderPosts(); }
function norm(u){ return (u||"").replace(/\/+$/,""); }
function applyConfig(){ els.shardUrl.value = cfg.url ?? location.origin; els.bearer.value = cfg.bearer ?? ""; els.passphrase.value = cfg.passphrase ?? ""; }
function applyConfig(){ els.shardUrl.value = cfg.url ?? defaultApiBase(); els.bearer.value = cfg.bearer ?? ""; els.passphrase.value = cfg.passphrase ?? ""; }
function msg(t, err=false){ els.publishStatus.textContent=t; els.publishStatus.style.color = err ? "#ff6b6b" : "#8b949e"; }
// Prefer session bearer
function getBearer() { return sessionStorage.getItem("gc_bearer") || cfg.bearer || ""; }
// -------- device key (P-256) + PoP --------
async function ensureDeviceKey() {
try {
const stored = JSON.parse(localStorage.getItem(DEVKEY_KEY) || "null");
if (stored && stored.priv && stored.pub) return;
} catch {}
const kp = await crypto.subtle.generateKey({ name: "ECDSA", namedCurve: "P-256" }, true, ["sign", "verify"]);
const pkcs8 = await crypto.subtle.exportKey("pkcs8", kp.privateKey);
const rawPub = await crypto.subtle.exportKey("raw", kp.publicKey); // 65-byte uncompressed
const b64pk = b64(rawPub);
const b64sk = b64(pkcs8);
localStorage.setItem(DEVKEY_KEY, JSON.stringify({ priv: b64sk, pub: b64pk, alg: "p256" }));
}
async function getDevicePriv() {
const s = JSON.parse(localStorage.getItem(DEVKEY_KEY) || "{}");
if (s.alg !== "p256") throw new Error("unsupported alg");
const pkcs8 = ub64(s.priv);
return crypto.subtle.importKey("pkcs8", pkcs8, { name: "ECDSA", namedCurve: "P-256" }, false, ["sign"]);
}
function getDevicePubHdr() {
const s = JSON.parse(localStorage.getItem(DEVKEY_KEY) || "{}");
if (!s.pub) return "";
return s.alg === "p256" ? ("p256:" + s.pub) : "";
}
async function popHeaders(method, url, body) {
const ts = Math.floor(Date.now()/1000).toString();
const pub = getDevicePubHdr();
const digest = await sha256Hex(body || new Uint8Array());
const msg = (method.toUpperCase()+"\n"+url+"\n"+ts+"\n"+digest);
const priv = await getDevicePriv();
const sig = await crypto.subtle.sign({ name: "ECDSA", hash: "SHA-256" }, priv, new TextEncoder().encode(msg));
return { "X-GC-Key": pub, "X-GC-TS": ts, "X-GC-Proof": b64(new Uint8Array(sig)) };
}
async function fetchAPI(path, opts = {}, bodyBytes) {
if (!cfg.url) throw new Error("Set shard URL first.");
const url = cfg.url + path;
const method = (opts.method || "GET").toUpperCase();
const headers = Object.assign({}, opts.headers || {});
const bearer = getBearer();
if (bearer) headers["Authorization"] = "Bearer " + bearer;
const pop = await popHeaders(method, url, bodyBytes);
Object.assign(headers, pop);
const init = Object.assign({}, opts, { method, headers, body: opts.body });
const r = await fetch(url, init);
return r;
}
// -------- health, index, sse --------
async function checkHealth() {
if (!cfg.url) return; els.health.textContent = "Checking…";
try { const r = await fetch(cfg.url + "/healthz"); els.health.textContent = r.ok ? "Connected ✔" : `Error: ${r.status}`; }
catch { els.health.textContent = "Not reachable"; }
}
async function publish() {
if (!cfg.url) return msg("Set shard URL first.", true);
const title = els.title.value.trim(); const body = els.body.value; const vis = els.visibility.value;
try {
let blob, enc=false;
if (vis === "private") {
if (!cfg.passphrase) return msg("Set a passphrase for private posts.", true);
const payload = await encryptString(JSON.stringify({ title, body }), cfg.passphrase);
blob = toBlob(payload); enc=true;
} else { blob = toBlob(JSON.stringify({ title, body })); }
const headers = { "Content-Type":"application/octet-stream" };
if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
if (enc) headers["X-GC-Private"] = "1";
const r = await fetch(cfg.url + "/v1/object", { method:"PUT", headers, body: blob });
if (!r.ok) throw new Error(await r.text());
const j = await r.json();
const posts = getPosts();
posts.unshift({ hash:j.hash, title: title || "(untitled)", bytes:j.bytes, ts:j.stored_at, enc });
setPosts(posts);
els.body.value = ""; msg(`Published ${enc?"private":"public"} post. Hash: ${j.hash}`);
} catch(e){ msg("Publish failed: " + (e?.message||e), true); }
const r = await fetch(cfg.url + "/healthz");
els.health.textContent = r.ok ? "Connected ✔" : `Error: ${r.status}`;
} catch { els.health.textContent = "Not reachable"; }
}
function msg(t, err=false){ els.publishStatus.textContent=t; els.publishStatus.style.color = err ? "#ff6b6b" : "#8b949e"; }
async function syncIndex() {
if (!cfg.url) return;
try {
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
const r = await fetch(cfg.url + "/v1/index", { headers });
const r = await fetchAPI("/v1/index");
if (!r.ok) throw new Error("index fetch failed");
const entries = await r.json();
setPosts(entries.map(e => ({ hash:e.hash, title:"(title unknown — fetch)", bytes:e.bytes, ts:e.stored_at, enc:e.private })));
setPosts(entries.map(e => ({ hash:e.hash, title:"(title unknown — fetch)", bytes:e.bytes, ts:e.stored_at, enc:e.private, tz:e.creator_tz })));
} catch(e){ console.warn("index sync failed", e); }
}
let sseCtrl;
function sse(){
function sse(restart){
if (!cfg.url) return;
if (sseCtrl) { sseCtrl.abort(); sseCtrl = undefined; }
sseCtrl = new AbortController();
const url = cfg.url + "/v1/index/stream";
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
const headers = {};
const b = getBearer(); if (b) headers["Authorization"] = "Bearer " + b;
headers["X-GC-Key"] = getDevicePubHdr();
headers["X-GC-TS"] = Math.floor(Date.now()/1000).toString();
headers["X-GC-Proof"] = "dummy"; // server ignores body hash for GET; proof not required for initial request in this demo SSE; if required, switch to EventSource polyfill
fetch(url, { headers, signal: sseCtrl.signal }).then(async resp => {
if (!resp.ok) return;
const reader = resp.body.getReader(); const decoder = new TextDecoder();
@@ -101,7 +172,7 @@ function sse(){
const e = ev.data;
const posts = getPosts();
if (!posts.find(p => p.hash === e.hash)) {
posts.unshift({ hash:e.hash, title:"(title unknown — fetch)", bytes:e.bytes, ts:e.stored_at, enc:e.private });
posts.unshift({ hash:e.hash, title:"(title unknown — fetch)", bytes:e.bytes, ts:e.stored_at, enc:e.private, tz:e.creator_tz });
setPosts(posts);
}
} else if (ev.event === "delete") {
@@ -114,11 +185,39 @@ function sse(){
}).catch(()=>{});
}
// -------- actions --------
async function publish() {
if (!cfg.url) return msg("Set shard URL first.", true);
const title = els.title.value.trim(); const body = els.body.value; const vis = els.visibility.value;
try {
let blob, enc=false;
if (vis === "private") {
if (!cfg.passphrase) return msg("Set a passphrase for private posts.", true);
const payload = await encryptString(JSON.stringify({ title, body }), cfg.passphrase);
blob = toBlob(payload); enc=true;
} else { blob = toBlob(JSON.stringify({ title, body })); }
const tz = Intl.DateTimeFormat().resolvedOptions().timeZone || "";
const headers = { "Content-Type":"application/octet-stream", "X-GC-TZ": tz };
const bearer = getBearer(); if (bearer) headers["Authorization"] = "Bearer " + bearer;
if (enc) headers["X-GC-Private"] = "1";
const bodyBytes = new Uint8Array(await blob.arrayBuffer());
const pop = await popHeaders("PUT", cfg.url + "/v1/object", bodyBytes);
Object.assign(headers, pop);
const r = await fetch(cfg.url + "/v1/object", { method:"PUT", headers, body: blob });
if (!r.ok) throw new Error(await r.text());
const j = await r.json();
const posts = getPosts();
posts.unshift({ hash:j.hash, title: title || "(untitled)", bytes:j.bytes, ts:j.stored_at, enc:j.private, tz:j.creator_tz });
setPosts(posts);
els.body.value = ""; msg(`Published ${enc?"private":"public"} post. Hash: ${j.hash}`);
} catch(e){ msg("Publish failed: " + (e?.message||e), true); }
}
async function viewPost(p, pre) {
pre.textContent = "Loading…";
try {
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
const r = await fetch(cfg.url + "/v1/object/" + p.hash, { headers });
const r = await fetchAPI("/v1/object/" + p.hash);
if (!r.ok) throw new Error("fetch failed " + r.status);
const buf = new Uint8Array(await r.arrayBuffer());
let text;
@@ -134,8 +233,7 @@ async function viewPost(p, pre) {
}
async function saveBlob(p) {
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
const r = await fetch(cfg.url + "/v1/object/" + p.hash, { headers });
const r = await fetchAPI("/v1/object/" + p.hash);
if (!r.ok) return alert("download failed " + r.status);
const b = await r.blob();
const a = document.createElement("a"); a.href = URL.createObjectURL(b);
@@ -143,28 +241,48 @@ async function saveBlob(p) {
}
async function delServer(p) {
const headers = {}; if (cfg.bearer) headers["Authorization"] = "Bearer " + cfg.bearer;
if (!confirm("Delete blob from server by hash?")) return;
const r = await fetch(cfg.url + "/v1/object/" + p.hash, { method:"DELETE", headers });
const r = await fetchAPI("/v1/object/" + p.hash, { method:"DELETE" });
if (!r.ok) return alert("delete failed " + r.status);
setPosts(getPosts().filter(x=>x.hash!==p.hash));
}
async function discordStart() {
if (!cfg.url) { alert("Set shard URL first."); return; }
const r = await fetch(cfg.url + "/v1/auth/discord/start", { headers: { "X-GC-3P-Assent":"1" }});
const headers = { "X-GC-3P-Assent":"1", "X-GC-Key": getDevicePubHdr() };
const r = await fetch(cfg.url + "/v1/auth/discord/start", { headers });
if (!r.ok) { alert("Discord SSO not available"); return; }
const j = await r.json();
location.href = j.url;
}
// Optional: Key-based login (no OAuth)
async function signInWithDeviceKey(){
if (!cfg.url) { alert("Set shard URL first."); return; }
const c = await fetch(cfg.url + "/v1/auth/key/challenge", { method:"POST" }).then(r=>r.json());
const msg = "key-verify\n" + c.nonce;
const priv = await getDevicePriv();
const sig = await crypto.subtle.sign({ name:"ECDSA", hash:"SHA-256" }, priv, new TextEncoder().encode(msg));
const body = JSON.stringify({ nonce:c.nonce, alg:"p256", pub: getDevicePubHdr().slice("p256:".length), sig: b64(new Uint8Array(sig)) });
const r = await fetch(cfg.url + "/v1/auth/key/verify", { method:"POST", headers:{ "Content-Type":"application/json" }, body });
if (!r.ok) { alert("Key sign-in failed"); return; }
const j = await r.json();
sessionStorage.setItem("gc_bearer", j.bearer);
const k = "gc_client_config_v1"; const cfg0 = JSON.parse(localStorage.getItem(k) || "{}"); cfg0.bearer = j.bearer; localStorage.setItem(k, JSON.stringify(cfg0));
alert("Signed in");
}
// -------- render --------
function renderPosts() {
const posts = getPosts(); els.posts.innerHTML = "";
for (const p of posts) {
const div = document.createElement("div"); div.className = "post";
const badge = p.enc ? `<span class="badge">private</span>` : `<span class="badge">public</span>`;
const tsLocal = new Date(p.ts).toLocaleString();
const tz = p.tz ? ` · author TZ: ${p.tz}` : "";
div.innerHTML = `
<div class="meta"><code>${p.hash.slice(0,10)}…</code> · ${p.bytes} bytes · ${p.ts} ${badge}</div>
<div class="meta"><code>${p.hash.slice(0,10)}…</code> · ${p.bytes} bytes · ${tsLocal}${tz} ${badge}</div>
<div class="actions">
<button data-act="view">View</button>
<button data-act="save">Save blob</button>
@@ -180,3 +298,27 @@ function renderPosts() {
els.posts.appendChild(div);
}
}
// -------- utils --------
function b64(buf){ return base64url(buf); }
function ub64(s){ return base64urlDecode(s); }
async function sha256Hex(bytes){
const d = await crypto.subtle.digest("SHA-256", bytes);
return Array.from(new Uint8Array(d)).map(b=>b.toString(16).padStart(2,"0")).join("");
}
// minimal base64url helpers
function base64url(buf){
let b = (buf instanceof Uint8Array) ? buf : new Uint8Array(buf);
let str = "";
for (let i=0; i<b.length; i++) str += String.fromCharCode(b[i]);
return btoa(str).replace(/\+/g,"-").replace(/\//g,"_").replace(/=+$/,"");
}
function base64urlDecode(s){
s = s.replace(/-/g,"+").replace(/_/g,"/");
while (s.length % 4) s += "=";
const bin = atob(s); const b = new Uint8Array(bin.length);
for (let i=0;i<bin.length;i++) b[i] = bin.charCodeAt(i);
return b;
}

auth-callback.html

@@ -1,43 +1,20 @@
<!doctype html>
<html>
<head>
<meta charset="utf-8"/>
<title>GreenCoast — Auth Callback</title>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<style>
body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Arial; background:#0b1117; color:#e6edf3; display:flex; align-items:center; justify-content:center; height:100vh; }
.card { background:#0f1621; padding:1rem 1.2rem; border-radius:14px; max-width:560px; }
.muted{ color:#8b949e; }
</style>
</head>
<body>
<div class="card">
<h3>Signing you in…</h3>
<div id="msg" class="muted">Please wait.</div>
</div>
<script type="module">
const params = new URLSearchParams(location.search);
const code = params.get("code");
const origin = location.origin; // shard and client served together
const msg = (t)=>document.getElementById("msg").textContent = t;
async function run() {
if (!code) { msg("Missing 'code' parameter."); return; }
<meta charset="utf-8">
<title>Signing you in…</title>
<script>
(function(){
const hash = new URLSearchParams(location.hash.slice(1));
const bearer = hash.get("bearer");
const next = hash.get("next") || "/";
try {
const r = await fetch(origin + "/v1/auth/discord/callback?assent=1&code=" + encodeURIComponent(code));
if (!r.ok) { msg("Exchange failed: " + r.status); return; }
const j = await r.json();
const key = "gc_client_config_v1";
const cfg = JSON.parse(localStorage.getItem(key) || "{}");
cfg.bearer = j.token;
localStorage.setItem(key, JSON.stringify(cfg));
msg("Success. Redirecting…");
setTimeout(()=>location.href="/", 800);
} catch(e) {
msg("Error: " + (e?.message || e));
}
}
run();
// Prefer sessionStorage; keep localStorage for backward compatibility
if (bearer) sessionStorage.setItem("gc_bearer", bearer);
const k = "gc_client_config_v1";
const cfg = JSON.parse(localStorage.getItem(k) || "{}");
if (bearer) cfg.bearer = bearer;
localStorage.setItem(k, JSON.stringify(cfg));
} catch {}
history.replaceState(null, "", next);
location.href = next;
})();
</script>
</body>
</html>

index.html

@@ -4,6 +4,8 @@
<meta charset="utf-8"/>
<title>GreenCoast — Client</title>
<meta name="viewport" content="width=device-width,initial-scale=1"/>
<!-- Force API base for Cloudflare tunneled API -->
<meta name="gc-api-base" content="https://api-gc.fullmooncyberworks.com">
<link rel="stylesheet" href="./styles.css"/>
</head>
<body>
@@ -14,7 +16,7 @@
<h2>Connect</h2>
<div class="row">
<label>Shard URL</label>
<input id="shardUrl" placeholder="http://localhost:8080" />
<input id="shardUrl" placeholder="https://api-gc.fullmooncyberworks.com" />
</div>
<div class="row">
<label>Bearer (optional)</label>
@@ -54,6 +56,9 @@
<label>Body</label>
<textarea id="body" rows="6" placeholder="Write your post..."></textarea>
</div>
<div class="row">
<label><input type="checkbox" id="shareTZ" checked> Include my time zone on this post</label>
</div>
<button id="publish">Publish</button>
<div id="publishStatus" class="muted"></div>
</section>


@@ -1,89 +1,149 @@
package main
import (
"flag"
"log"
"path/filepath"
"net/http"
"os"
"strconv"
"time"
"greencoast/internal/api"
"greencoast/internal/config"
"greencoast/internal/federation"
"greencoast/internal/index"
"greencoast/internal/storage"
)
func main() {
cfgPath := flag.String("config", "shard.yaml", "path to config")
flag.Parse()
cfg, err := config.Load(*cfgPath)
func getenvBool(key string, def bool) bool {
v := os.Getenv(key)
if v == "" {
return def
}
b, err := strconv.ParseBool(v)
if err != nil {
log.Fatalf("config error: %v", err)
return def
}
return b
}
store, err := storage.NewFSStore(cfg.Storage.Path, cfg.Storage.MaxObjectKB)
if err != nil {
log.Fatalf("storage error: %v", err)
}
func staticHeaders(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Security posture for static client
w.Header().Set("Referrer-Policy", "no-referrer")
w.Header().Set("Cross-Origin-Opener-Policy", "same-origin")
w.Header().Set("Cross-Origin-Resource-Policy", "same-site")
w.Header().Set("Permissions-Policy", "camera=(), microphone=(), geolocation=(), interest-cohort=(), browsing-topics=()")
w.Header().Set("X-Frame-Options", "DENY")
w.Header().Set("X-Content-Type-Options", "nosniff")
w.Header().Set("Strict-Transport-Security", "max-age=15552000; includeSubDomains; preload")
dataRoot := filepath.Dir(cfg.Storage.Path)
idx := index.New(dataRoot)
// Strong CSP to block XSS/token theft (enumerate your API host)
w.Header().Set("Content-Security-Policy", "default-src 'self'; base-uri 'none'; object-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' data:; connect-src 'self' https://api-gc.fullmooncyberworks.com; frame-ancestors 'none'")
srv := api.New(
store, idx,
cfg.Privacy.RetainTimestamps == "coarse",
cfg.Security.ZeroTrust,
api.AuthProviders{
SigningSecretHex: cfg.Auth.SigningSecret,
Discord: api.DiscordProvider{
Enabled: cfg.Auth.SSO.Discord.Enabled,
ClientID: cfg.Auth.SSO.Discord.ClientID,
ClientSecret: cfg.Auth.SSO.Discord.ClientSecret,
RedirectURI: cfg.Auth.SSO.Discord.RedirectURI,
},
GoogleEnabled: cfg.Auth.SSO.Google.Enabled,
FacebookEnabled: cfg.Auth.SSO.Facebook.Enabled,
WebAuthnEnabled: cfg.Auth.TwoFactor.WebAuthnEnabled,
TOTPEnabled: cfg.Auth.TwoFactor.TOTPEnabled,
},
)
// Optional: also mount static under API mux (subpath) if you later want that.
// srv.MountStatic(cfg.UI.Path, "/app")
// Start federation mTLS (if enabled)
if cfg.Federation.MTLSEnable {
tlsCfg, err := federation.ServerTLSConfig(
cfg.Federation.CertFile,
cfg.Federation.KeyFile,
cfg.Federation.ClientCAFile,
)
if err != nil {
log.Fatalf("federation tls config error: %v", err)
}
go func() {
if err := srv.ListenMTLS(cfg.Federation.Listen, tlsCfg); err != nil {
log.Fatalf("federation mTLS listener error: %v", err)
}
}()
}
// Start FRONTEND listener (separate port) if enabled
if cfg.UI.Enable && cfg.UI.FrontendHTTP != "" {
go func() {
if err := srv.ListenFrontendHTTP(cfg.UI.FrontendHTTP, cfg.UI.Path, cfg.UI.BaseURL); err != nil {
log.Fatalf("frontend listener error: %v", err)
}
}()
}
// Choose ONE foreground listener for API: HTTPS if enabled, else HTTP.
if cfg.TLS.Enable && cfg.Listen.HTTPS != "" {
log.Fatal(srv.ListenHTTPS(cfg.Listen.HTTPS, cfg.TLS.CertFile, cfg.TLS.KeyFile))
// CORS for assets
w.Header().Set("Access-Control-Allow-Origin", "*")
if r.Method == http.MethodOptions {
w.Header().Set("Access-Control-Allow-Methods", "GET, OPTIONS")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
w.WriteHeader(http.StatusNoContent)
return
}
if cfg.Listen.HTTP == "" {
log.Fatal("no API listeners configured (set listen.http or listen.https)")
next.ServeHTTP(w, r)
})
}
func main() {
httpAddr := os.Getenv("GC_HTTP_ADDR")
if httpAddr == "" {
httpAddr = ":9080"
}
httpsAddr := os.Getenv("GC_HTTPS_ADDR")
certFile := os.Getenv("GC_TLS_CERT")
keyFile := os.Getenv("GC_TLS_KEY")
dataDir := os.Getenv("GC_DATA_DIR")
if dataDir == "" {
dataDir = "/var/lib/greencoast"
}
staticDir := os.Getenv("GC_STATIC_DIR")
if staticDir == "" {
staticDir = "/opt/greencoast/client"
}
staticAddr := os.Getenv("GC_STATIC_ADDR")
if staticAddr == "" {
staticAddr = ":9082"
}
coarseTS := getenvBool("GC_COARSE_TS", false)
zeroTrust := getenvBool("GC_ZERO_TRUST", true)
signingSecretHex := os.Getenv("GC_SIGNING_SECRET_HEX")
discID := os.Getenv("GC_DISCORD_CLIENT_ID")
discSecret := os.Getenv("GC_DISCORD_CLIENT_SECRET")
discRedirect := os.Getenv("GC_DISCORD_REDIRECT_URI")
store, err := storage.NewFS(dataDir)
if err != nil {
log.Fatalf("storage init: %v", err)
}
ix := index.New()
// Auto-reindex on boot if possible
if w, ok := any(store).(interface {
Walk(func(hash string, size int64, mod time.Time) error) error
}); ok {
if err := w.Walk(func(hash string, size int64, mod time.Time) error {
return ix.Put(index.Entry{
Hash: hash,
Bytes: size,
StoredAt: mod.UTC().Format(time.RFC3339Nano),
Private: false,
})
}); err != nil {
log.Printf("reindex on boot: %v", err)
}
}
ap := api.AuthProviders{
SigningSecretHex: signingSecretHex,
Discord: api.DiscordProvider{
Enabled: discID != "" && discSecret != "" && discRedirect != "",
ClientID: discID,
ClientSecret: discSecret,
RedirectURI: discRedirect,
},
}
srv := api.New(store, ix, coarseTS, zeroTrust, ap)
// Static client server (9082)
go func() {
if st, err := os.Stat(staticDir); err != nil || !st.IsDir() {
log.Printf("WARN: GC_STATIC_DIR %q not found or not a dir; client may 404", staticDir)
}
mux := http.NewServeMux()
// Optional: forward API paths to API host to avoid 404 if user hits wrong host
mux.Handle("/v1/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "https://api-gc.fullmooncyberworks.com"+r.URL.Path, http.StatusTemporaryRedirect)
}))
mux.Handle("/", http.FileServer(http.Dir(staticDir)))
log.Printf("static listening on %s (dir=%s)", staticAddr, staticDir)
if err := http.ListenAndServe(staticAddr, staticHeaders(mux)); err != nil {
log.Fatalf("static server: %v", err)
}
}()
if httpsAddr != "" && certFile != "" && keyFile != "" {
log.Printf("starting HTTPS API on %s", httpsAddr)
if err := srv.ListenHTTPS(httpsAddr, certFile, keyFile); err != nil {
log.Fatal(err)
}
return
}
log.Printf("starting HTTP API on %s", httpAddr)
if err := srv.ListenHTTP(httpAddr); err != nil {
log.Fatal(err)
}
}

69
configs/shard.test.yaml Normal file
View File

@@ -0,0 +1,69 @@
shard_id: "gc-test-001"
listen:
http: "0.0.0.0:9080" # API for testers
https: "" # if you terminate TLS at a proxy, leave empty
ws: "0.0.0.0:9081" # reserved
tls:
enable: false # set true only if serving HTTPS directly here
cert_file: "/etc/greencoast/tls/cert.pem"
key_file: "/etc/greencoast/tls/key.pem"
federation:
mtls_enable: false
listen: "0.0.0.0:9443"
cert_file: "/etc/greencoast/fed/cert.pem"
key_file: "/etc/greencoast/fed/key.pem"
client_ca_file: "/etc/greencoast/fed/clients_ca.pem"
ui:
enable: true
path: "./client"
base_url: "/"
frontend_http: "0.0.0.0:9082" # static client for testers
storage:
backend: "fs"
path: "/var/lib/greencoast/objects"
max_object_kb: 128 # lower if you want to constrain uploads
security:
zero_trust: true
require_mtls_for_federation: true
accept_client_signed_tokens: true
log_level: "warn"
privacy:
retain_ip: "no"
retain_user_agent: "no"
retain_timestamps: "coarse"
auth:
# IMPORTANT: rotate this per environment (use `openssl rand -hex 32`)
signing_secret: "D941C4F91D0046D28CDBC3F425DE0B4EA26BD2A80434E0F160D1B7C813EB43F8"
sso:
discord:
enabled: true
client_id: "1408292766319906946"
client_secret: "zJ6GnUUykHbMFbWsPPneNxNK-PtOXYg1"
# must exactly match your Discord app's allowed redirect
redirect_uri: "https://greencoast.fullmooncyberworks.com/auth-callback.html"
google:
enabled: false
client_id: ""
client_secret: ""
redirect_uri: ""
facebook:
enabled: false
client_id: ""
client_secret: ""
redirect_uri: ""
two_factor:
webauthn_enabled: false
totp_enabled: false
limits:
rate:
burst: 20
per_minute: 60 # slightly tighter for external testing

25
docker-compose.test.yml Normal file
View File

@@ -0,0 +1,25 @@
version: "3.9"
services:
shard-test:
build: .
container_name: greencoast-shard-test
restart: unless-stopped
user: "0:0"
# These ports are optional (useful for local debug). Tunnel doesn't need them.
ports:
- "9080:9080" # API
- "9082:9082" # Frontend
environment:
- GC_DEV_ALLOW_UNAUTH=true
volumes:
- ./testdata:/var/lib/greencoast
- ./configs/shard.test.yaml:/app/shard.yaml:ro
- ./client:/app/client:ro
cloudflared:
image: cloudflare/cloudflared:latest
command: tunnel --no-autoupdate run --token ${CF_TUNNEL_TOKEN}
restart: unless-stopped
depends_on:
- shard-test

File diff suppressed because it is too large

View File

@@ -2,6 +2,7 @@ package api
import (
"log"
"mime"
"net/http"
"os"
"path/filepath"
@@ -9,7 +10,14 @@ import (
"time"
)
// Mount static on the API mux (kept for compatibility; still serves under API port if you want)
func init() {
// Ensure common types are known (some distros are sparse by default)
_ = mime.AddExtensionType(".js", "application/javascript; charset=utf-8")
_ = mime.AddExtensionType(".css", "text/css; charset=utf-8")
_ = mime.AddExtensionType(".html", "text/html; charset=utf-8")
_ = mime.AddExtensionType(".map", "application/json; charset=utf-8")
}
func (s *Server) MountStatic(dir string, baseURL string) {
if dir == "" {
return
@@ -23,7 +31,6 @@ func (s *Server) MountStatic(dir string, baseURL string) {
}
}
// NEW: serve the same static handler on its own port (frontend).
func (s *Server) ListenFrontendHTTP(addr, dir, baseURL string) error {
if dir == "" || addr == "" {
return nil
@@ -48,6 +55,7 @@ func (s *Server) staticHandler(dir, baseURL string) http.Handler {
}
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
s.secureHeaders(w)
up := strings.TrimPrefix(r.URL.Path, baseURL)
if up == "" || strings.HasSuffix(r.URL.Path, "/") {
up = "index.html"
@@ -57,12 +65,19 @@ func (s *Server) staticHandler(dir, baseURL string) http.Handler {
http.NotFound(w, r)
return
}
// Serve file if it exists, else SPA-fallback to index.html
if st, err := os.Stat(full); err == nil && !st.IsDir() {
// Set Content-Type explicitly based on extension
if ctype := mime.TypeByExtension(filepath.Ext(full)); ctype != "" {
w.Header().Set("Content-Type", ctype)
}
http.ServeFile(w, r, full)
return
}
fallback := filepath.Join(dir, "index.html")
if _, err := os.Stat(fallback); err == nil {
w.Header().Set("Content-Type", "text/html; charset=utf-8")
http.ServeFile(w, r, fallback)
return
}

78
internal/auth/gc2.go Normal file
View File

@@ -0,0 +1,78 @@
package auth
import (
"crypto/hmac"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
"strings"
"time"
)
type Claims struct {
Sub string `json:"sub"` // account ID (acc_…)
Exp int64 `json:"exp"` // unix seconds
Nbf int64 `json:"nbf,omitempty"` // not before
Iss string `json:"iss,omitempty"` // greencoast
Aud string `json:"aud,omitempty"` // api
Jti string `json:"jti,omitempty"` // token id (optional)
CNF string `json:"cnf,omitempty"` // key binding: "p256:<b64raw>" or "ed25519:<b64raw>"
}
func MintGC2(signKey []byte, c Claims) (string, error) {
if len(signKey) == 0 {
return "", errors.New("sign key missing")
}
if c.Sub == "" || c.Exp == 0 {
return "", errors.New("claims incomplete")
}
body, _ := json.Marshal(c)
mac := hmac.New(sha256.New, signKey)
mac.Write(body)
sig := mac.Sum(nil)
return "gc2." + base64.RawURLEncoding.EncodeToString(body) + "." + base64.RawURLEncoding.EncodeToString(sig), nil
}
func VerifyGC2(signKey []byte, tok string, now time.Time) (Claims, error) {
var zero Claims
if !strings.HasPrefix(tok, "gc2.") {
return zero, errors.New("bad prefix")
}
parts := strings.Split(tok, ".")
if len(parts) != 3 {
return zero, errors.New("bad parts")
}
body, err := base64.RawURLEncoding.DecodeString(parts[1])
if err != nil {
return zero, err
}
want, err := base64.RawURLEncoding.DecodeString(parts[2])
if err != nil {
return zero, err
}
mac := hmac.New(sha256.New, signKey)
mac.Write(body)
if !hmac.Equal(want, mac.Sum(nil)) {
return zero, errors.New("bad sig")
}
var c Claims
if err := json.Unmarshal(body, &c); err != nil {
return zero, err
}
t := now.Unix()
if c.Nbf != 0 && t < c.Nbf {
return zero, errors.New("nbf")
}
if t > c.Exp {
return zero, errors.New("expired")
}
return c, nil
}
func AccountIDFromPub(raw []byte) string {
// acc_<first32 hex of sha256(pub)>
sum := sha256.Sum256(raw)
return "acc_" + hex.EncodeToString(sum[:16])
}

View File

@@ -1,123 +1,88 @@
package index
import (
"bufio"
"encoding/json"
"os"
"path/filepath"
"sort"
"sync"
"time"
)
type opType string
const (
OpPut opType = "put"
OpDel opType = "del"
)
type record struct {
Op opType `json:"op"`
Hash string `json:"hash"`
Bytes int64 `json:"bytes,omitempty"`
StoredAt time.Time `json:"stored_at,omitempty"`
Private bool `json:"private,omitempty"`
}
type Entry struct {
Hash string `json:"hash"`
Bytes int64 `json:"bytes"`
StoredAt time.Time `json:"stored_at"`
StoredAt string `json:"stored_at"`
Private bool `json:"private"`
CreatorTZ string `json:"creator_tz,omitempty"`
}
type rec struct {
Hash string
Bytes int64
StoredAt time.Time
Private bool
CreatorTZ string
}
type Index struct {
path string
mu sync.Mutex
mu sync.RWMutex
hash map[string]rec
}
func New(baseDir string) *Index {
return &Index{path: filepath.Join(baseDir, "index.jsonl")}
}
func New() *Index { return &Index{hash: make(map[string]rec)} }
func (i *Index) AppendPut(e Entry) error {
i.mu.Lock()
defer i.mu.Unlock()
return appendRec(i.path, record{
Op: OpPut,
func (ix *Index) Put(e Entry) error {
ix.mu.Lock()
defer ix.mu.Unlock()
t := parseWhen(e.StoredAt)
if t.IsZero() {
t = time.Now().UTC()
}
ix.hash[e.Hash] = rec{
Hash: e.Hash,
Bytes: e.Bytes,
StoredAt: e.StoredAt,
StoredAt: t,
Private: e.Private,
})
CreatorTZ: e.CreatorTZ,
}
return nil
}
func (i *Index) AppendDelete(hash string) error {
i.mu.Lock()
defer i.mu.Unlock()
return appendRec(i.path, record{Op: OpDel, Hash: hash})
func (ix *Index) Delete(hash string) error {
ix.mu.Lock()
defer ix.mu.Unlock()
delete(ix.hash, hash)
return nil
}
func appendRec(path string, r record) error {
if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
return err
func (ix *Index) List() ([]Entry, error) {
ix.mu.RLock()
defer ix.mu.RUnlock()
tmp := make([]rec, 0, len(ix.hash))
for _, r := range ix.hash {
tmp = append(tmp, r)
}
f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
if err != nil {
return err
}
defer f.Close()
enc := json.NewEncoder(f)
return enc.Encode(r)
}
func (i *Index) Snapshot() ([]Entry, error) {
i.mu.Lock()
defer i.mu.Unlock()
f, err := os.Open(i.path)
if os.IsNotExist(err) {
return nil, nil
}
if err != nil {
return nil, err
}
defer f.Close()
sc := bufio.NewScanner(f)
sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
type state struct {
Entry Entry
Deleted bool
}
m := make(map[string]state)
for sc.Scan() {
var rec record
if err := json.Unmarshal(sc.Bytes(), &rec); err != nil {
continue
}
switch rec.Op {
case OpPut:
m[rec.Hash] = state{Entry: Entry{
Hash: rec.Hash, Bytes: rec.Bytes, StoredAt: rec.StoredAt, Private: rec.Private,
}}
case OpDel:
s := m[rec.Hash]
s.Deleted = true
m[rec.Hash] = s
sort.Slice(tmp, func(i, j int) bool { return tmp[i].StoredAt.After(tmp[j].StoredAt) })
out := make([]Entry, len(tmp))
for i, r := range tmp {
out[i] = Entry{
Hash: r.Hash,
Bytes: r.Bytes,
StoredAt: r.StoredAt.UTC().Format(time.RFC3339Nano),
Private: r.Private,
CreatorTZ: r.CreatorTZ,
}
}
if err := sc.Err(); err != nil {
return nil, err
}
var out []Entry
for _, s := range m {
if !s.Deleted && s.Entry.Hash != "" {
out = append(out, s.Entry)
}
}
sort.Slice(out, func(i, j int) bool { return out[i].StoredAt.After(out[j].StoredAt) })
return out, nil
}
func parseWhen(s string) time.Time {
if s == "" {
return time.Time{}
}
if t, err := time.Parse(time.RFC3339Nano, s); err == nil {
return t
}
if t, err := time.Parse(time.RFC3339, s); err == nil {
return t
}
return time.Time{}
}

240
internal/storage/fs.go Normal file
View File

@@ -0,0 +1,240 @@
package storage
import (
"errors"
"io"
"io/fs"
"os"
"path/filepath"
"strings"
"time"
)
type FSStore struct {
root string
objects string
}
func NewFS(dir string) (*FSStore, error) {
if dir == "" {
return nil, errors.New("empty storage dir")
}
o := filepath.Join(dir, "objects")
if err := os.MkdirAll(o, 0o755); err != nil {
return nil, err
}
return &FSStore{root: dir, objects: o}, nil
}
func (s *FSStore) pathFlat(hash string) (string, error) {
if hash == "" {
return "", errors.New("empty hash")
}
return filepath.Join(s.objects, hash), nil
}
func isHexHash(name string) bool {
if len(name) != 64 {
return false
}
for i := 0; i < 64; i++ {
c := name[i]
if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f')) {
return false
}
}
return true
}
func (s *FSStore) findBlobPath(hash string) (string, error) {
if hash == "" {
return "", errors.New("empty hash")
}
// 1) flat
if p, _ := s.pathFlat(hash); fileExists(p) {
return p, nil
}
// 2) objects/<hash>/{blob,data,content}
dir := filepath.Join(s.objects, hash)
for _, cand := range []string{"blob", "data", "content"} {
p := filepath.Join(dir, cand)
if fileExists(p) {
return p, nil
}
}
// 3) objects/<hash>/<single file>
if st, err := os.Stat(dir); err == nil && st.IsDir() {
ents, _ := os.ReadDir(dir)
var picked string
var pickedMod time.Time
for _, de := range ents {
if de.IsDir() {
continue
}
p := filepath.Join(dir, de.Name())
fi, err := os.Stat(p)
if err == nil && fi.Mode().IsRegular() {
if picked == "" || fi.ModTime().After(pickedMod) {
picked, pickedMod = p, fi.ModTime()
}
}
}
if picked != "" {
return picked, nil
}
}
// 4) two-level prefix objects/aa/<hash>
if len(hash) >= 2 {
p := filepath.Join(s.objects, hash[:2], hash)
if fileExists(p) {
return p, nil
}
}
// 5) recursive search
var best string
var bestMod time.Time
_ = filepath.WalkDir(s.objects, func(p string, d fs.DirEntry, err error) error {
if err != nil || d.IsDir() {
return nil
}
base := filepath.Base(p)
if base == hash {
best = p
return fs.SkipDir
}
parent := filepath.Base(filepath.Dir(p))
if parent == hash {
if fi, err := os.Stat(p); err == nil && fi.Mode().IsRegular() {
if best == "" || fi.ModTime().After(bestMod) {
best, bestMod = p, fi.ModTime()
}
}
}
return nil
})
if best != "" {
return best, nil
}
return "", os.ErrNotExist
}
func fileExists(p string) bool {
fi, err := os.Stat(p)
return err == nil && fi.Mode().IsRegular()
}
func (s *FSStore) Put(hash string, r io.Reader) error {
p, err := s.pathFlat(hash)
if err != nil {
return err
}
if err := os.MkdirAll(filepath.Dir(p), 0o755); err != nil {
return err
}
tmp := p + ".tmp"
f, err := os.Create(tmp)
if err != nil {
return err
}
_, werr := io.Copy(f, r)
cerr := f.Close()
if werr != nil {
_ = os.Remove(tmp)
return werr
}
if cerr != nil {
_ = os.Remove(tmp)
return cerr
}
return os.Rename(tmp, p)
}
func (s *FSStore) Get(hash string) (io.ReadCloser, int64, error) {
p, err := s.findBlobPath(hash)
if err != nil {
return nil, 0, err
}
f, err := os.Open(p)
if err != nil {
return nil, 0, err
}
st, err := f.Stat()
if err != nil {
return f, 0, nil
}
return f, st.Size(), nil
}
func (s *FSStore) Delete(hash string) error {
if p, _ := s.pathFlat(hash); fileExists(p) {
if err := os.Remove(p); err == nil || errors.Is(err, os.ErrNotExist) {
return nil
}
}
dir := filepath.Join(s.objects, hash)
for _, cand := range []string{"blob", "data", "content"} {
p := filepath.Join(dir, cand)
if fileExists(p) {
if err := os.Remove(p); err == nil || errors.Is(err, os.ErrNotExist) {
return nil
}
}
}
if len(hash) >= 2 {
p := filepath.Join(s.objects, hash[:2], hash)
if fileExists(p) {
if err := os.Remove(p); err == nil || errors.Is(err, os.ErrNotExist) {
return nil
}
}
}
if p, err := s.findBlobPath(hash); err == nil {
if err := os.Remove(p); err == nil || errors.Is(err, os.ErrNotExist) {
return nil
}
}
return nil
}
func (s *FSStore) Walk(fn func(hash string, size int64, mod time.Time) error) error {
type rec struct {
size int64
mod time.Time
}
agg := make(map[string]rec)
_ = filepath.WalkDir(s.objects, func(p string, d fs.DirEntry, err error) error {
if err != nil || d.IsDir() {
return nil
}
fi, err := os.Stat(p)
if err != nil || !fi.Mode().IsRegular() {
return nil
}
base := filepath.Base(p)
if isHexHash(base) {
if r, ok := agg[base]; !ok || fi.ModTime().After(r.mod) {
agg[base] = rec{fi.Size(), fi.ModTime()}
}
return nil
}
parent := filepath.Base(filepath.Dir(p))
if isHexHash(parent) {
if r, ok := agg[parent]; !ok || fi.ModTime().After(r.mod) {
agg[parent] = rec{fi.Size(), fi.ModTime()}
}
return nil
}
if len(base) == 64 && isHexHash(strings.ToLower(base)) {
if r, ok := agg[base]; !ok || fi.ModTime().After(r.mod) {
agg[base] = rec{fi.Size(), fi.ModTime()}
}
}
return nil
})
for h, r := range agg {
if err := fn(h, r.size, r.mod); err != nil {
return err
}
}
return nil
}

View File

@@ -1,95 +0,0 @@
package storage
import (
"crypto/sha256"
"encoding/hex"
"errors"
"io"
"os"
"path/filepath"
)
type FSStore struct {
root string
maxObjectB int64
}
func NewFSStore(root string, maxKB int) (*FSStore, error) {
if root == "" {
root = "./data/objects"
}
if err := os.MkdirAll(root, 0o755); err != nil {
return nil, err
}
return &FSStore{root: root, maxObjectB: int64(maxKB) * 1024}, nil
}
func (s *FSStore) Put(r io.Reader) (string, int64, error) {
h := sha256.New()
tmp := filepath.Join(s.root, ".tmp")
_ = os.MkdirAll(tmp, 0o755)
f, err := os.CreateTemp(tmp, "obj-*")
if err != nil {
return "", 0, err
}
defer f.Close()
var n int64
buf := make([]byte, 32*1024)
for {
m, er := r.Read(buf)
if m > 0 {
n += int64(m)
if s.maxObjectB > 0 && n > s.maxObjectB {
return "", 0, errors.New("object too large")
}
_, _ = h.Write(buf[:m])
if _, werr := f.Write(buf[:m]); werr != nil {
return "", 0, werr
}
}
if er == io.EOF {
break
}
if er != nil {
return "", 0, er
}
}
sum := hex.EncodeToString(h.Sum(nil))
dst := filepath.Join(s.root, sum[:2], sum[2:4], sum)
if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
return "", 0, err
}
if err := os.Rename(f.Name(), dst); err != nil {
return "", 0, err
}
return sum, n, nil
}
func (s *FSStore) pathFor(hash string) string {
return filepath.Join(s.root, hash[:2], hash[2:4], hash)
}
func (s *FSStore) Get(hash string) (string, error) {
if len(hash) < 4 {
return "", os.ErrNotExist
}
p := s.pathFor(hash)
if _, err := os.Stat(p); err != nil {
return "", err
}
return p, nil
}
func (s *FSStore) Delete(hash string) error {
if len(hash) < 4 {
return os.ErrNotExist
}
p := s.pathFor(hash)
if err := os.Remove(p); err != nil {
return err
}
_ = os.Remove(filepath.Dir(p))
_ = os.Remove(filepath.Dir(filepath.Dir(p)))
return nil
}

4
testdata/index.jsonl vendored Normal file
View File

@@ -0,0 +1,4 @@
{"op":"put","hash":"a008a13ade86edbd77f5c0fcfcf35bd295c93069be42fdbd46bc65b392ddf5fb","bytes":110,"stored_at":"2025-08-22T03:00:00Z"}
{"op":"put","hash":"9628e2adcd7a5e820fbdbe075027ac0ad78ef1a7a501971c2048bc5e5436b891","bytes":105,"stored_at":"2025-08-22T03:00:00Z","private":true}
{"op":"put","hash":"6a166437b9988bd11e911375f3ca1b4cd10b7db9a32812409c6d79a0753dd973","bytes":98,"stored_at":"2025-08-22T03:00:00Z"}
{"op":"put","hash":"f452402fadb6608bd6f9b613a1d58234e2135f045ea29262574e3e4b1e5f7292","bytes":46,"stored_at":"2025-08-22T03:00:00Z"}
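The `index.jsonl` records above use the append-only `put`/`del` scheme from the old index: replaying the log in order, a later `del` tombstones an earlier `put` for the same hash. A standalone sketch of that replay (hashes shortened for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// record matches the JSONL shape in testdata/index.jsonl.
type record struct {
	Op    string `json:"op"`
	Hash  string `json:"hash"`
	Bytes int64  `json:"bytes,omitempty"`
}

// replay folds the append-only log into the set of live objects.
func replay(jsonl string) map[string]record {
	live := map[string]record{}
	for _, line := range strings.Split(strings.TrimSpace(jsonl), "\n") {
		var r record
		if json.Unmarshal([]byte(line), &r) != nil {
			continue // skip malformed lines, as the old Snapshot did
		}
		switch r.Op {
		case "put":
			live[r.Hash] = r
		case "del":
			delete(live, r.Hash)
		}
	}
	return live
}

func main() {
	log := `{"op":"put","hash":"aaa","bytes":10}
{"op":"put","hash":"bbb","bytes":20}
{"op":"del","hash":"aaa"}`
	fmt.Println(len(replay(log))) // 1
}
```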

View File

@@ -0,0 +1 @@
{"title":"Timezone Publish","body":"You can now include your timezone on all of your posts. This is completely optional but lets others see when you posted"}

View File

@@ -0,0 +1 @@
{"title":"Yarn is Testing!","body":"Hello, my name is Yarn. And I like to test. Test test 1 2 3."}

View File

@@ -0,0 +1 @@
(binary data: an encrypted object blob; not representable as text)

View File

@@ -0,0 +1 @@
{"title":"Public Test","body":"Hello Everyone,\n\nWelcome to GreenCoast, a BlueSky Replacement\n\nMystiatech"}

View File

@@ -0,0 +1 @@
{"title":"Test post","body":"Does this work?"}