From 1abcdb9282d4c7ba4b4d77276e9cacaa98ed7cf9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=B2=9B=E7=9F=B3?= <54824693+bruce2233@users.noreply.github.com> Date: Sat, 31 Jan 2026 14:43:37 +0800 Subject: [PATCH] Revert "WAP: generate paper page" --- index.html | 19 - .../grad-en.html | 88 ---- .../grad-zh.html | 88 ---- .../https-arxiv-org-abs-2511-13719/hs-en.html | 67 --- .../https-arxiv-org-abs-2511-13719/hs-zh.html | 67 --- .../https-arxiv-org-abs-2511-13719/index.html | 97 ---- .../https-arxiv-org-abs-2511-13719/script.js | 77 --- .../https-arxiv-org-abs-2511-13719/styles.css | 457 ------------------ 8 files changed, 960 deletions(-) delete mode 100644 papers/https-arxiv-org-abs-2511-13719/grad-en.html delete mode 100644 papers/https-arxiv-org-abs-2511-13719/grad-zh.html delete mode 100644 papers/https-arxiv-org-abs-2511-13719/hs-en.html delete mode 100644 papers/https-arxiv-org-abs-2511-13719/hs-zh.html delete mode 100644 papers/https-arxiv-org-abs-2511-13719/index.html delete mode 100644 papers/https-arxiv-org-abs-2511-13719/script.js delete mode 100644 papers/https-arxiv-org-abs-2511-13719/styles.css diff --git a/index.html b/index.html index 56e7fc2..a5082c9 100644 --- a/index.html +++ b/index.html @@ -103,25 +103,6 @@

Quick Access

- -

- Doc-Researcher - Doc-Researcher -

-
- AI expert that reads charts, tables, and layouts like a human for complex documents. - 像人类专家一样阅读图表、表格与布局,解决复杂文档研究任务。 -
-
- Deep multimodal parsing + hybrid retrieval paradigms + iterative multi-agent workflows; 50.6% on M4DocBench. - 深度多模态解析 + 混合检索范式 + 迭代多代理流;在 M4DocBench 取得 50.6% 准确率。 -
-
- arXiv 2510.21603 - Multimodal - Agents -
-

Attention Is All You Need 注意力机制即你所需 diff --git a/papers/https-arxiv-org-abs-2511-13719/grad-en.html b/papers/https-arxiv-org-abs-2511-13719/grad-en.html deleted file mode 100644 index 2364295..0000000 --- a/papers/https-arxiv-org-abs-2511-13719/grad-en.html +++ /dev/null @@ -1,88 +0,0 @@ - - - - - - Doc-Researcher (Grad-EN) | WAP - - - - - -
-
-
GRADUATE EDITION / ENGLISH
-

Doc-Researcher: Overcoming the Multimodal Processing Bottleneck

-

A technical deep-dive into deep multimodal parsing, adaptive retrieval, and agentic evidence synthesis.

-
- - - -
-

Motivation & Problem Statement

-

Current "Deep Research" systems (based on LLMs) are largely restricted to text-based web content. In professional and scientific domains, knowledge is concentrated in highly structured multimodal documents (PDFs/scans). Standard RAG (Retrieval-Augmented Generation) pipelines fail here because they often "flatten" the structure, losing vital visual semantics such as the relationship between a chart's axes or the hierarchical context of a table.

-
- -
-

I. Deep Multimodal Parsing

-

Doc-Researcher employs a parsing engine that preserves multimodal integrity. It creates multi-granular representations:

-
    -
  • Chunk-level: Captures local context including equations and inline symbols.
  • Block-level: Respects logical visual boundaries (e.g., a specific figure with its caption).
  • Document-level: Maintains layout hierarchy and global semantics.
-
Key Innovation: The system maps visual elements to text descriptions while keeping the original pixel features for vision-centric retrieval.
-
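To make the three granularities above concrete, here is a minimal JavaScript sketch of what the parser's multi-granular output could look like. It is illustrative only: the field names and the embedText/embedImage helpers are assumptions rather than the paper's actual API; the point is that block-level entries keep a text description and the original pixel crop side by side.

// Illustrative only: field names and the embedText/embedImage encoders are
// assumptions standing in for whatever the real system uses.
function buildRepresentations(parsedDoc, embedText, embedImage) {
  // Chunk level: local context such as prose, equations, inline symbols.
  const chunks = parsedDoc.chunks.map((c) => ({
    id: c.id,
    text: c.text,
    textVec: embedText(c.text),
  }));

  // Block level: a logical visual unit, e.g. a figure plus its caption.
  // The text description is kept alongside the original pixels.
  const blocks = parsedDoc.blocks.map((b) => ({
    id: b.id,
    caption: b.caption,
    description: b.textDescription,
    pixels: b.imageCrop,
    textVec: embedText(b.textDescription),
    imageVec: embedImage(b.imageCrop),
  }));

  // Document level: layout hierarchy and global semantics.
  const docLevel = {
    id: parsedDoc.id,
    layoutTree: parsedDoc.layoutTree,
    summaryVec: embedText(parsedDoc.summary),
  };

  return { chunks, blocks, docLevel };
}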
- -
-

II. Systematic Hybrid Retrieval

-

The system utilizes an architecture that supports three paradigms:

-
    -
  1. Text-only: Standard semantic search on text chunks.
  2. Vision-only: Directly retrieving document segments based on visual similarity.
  3. Hybrid: Combining text and vision signals with dynamic granularity selection, choosing between fine-grained chunks or broader document context based on query ambiguity.
-
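As a rough JavaScript illustration of how the three paradigms could be wired together, the sketch below scores candidates by text similarity, visual similarity, or both, and falls back to broader document-level units for ambiguous queries. The cosine helper, the index layout, and the isAmbiguous flag are assumptions made for the example, not details taken from the paper.

// Sketch only: the index layout, query fields, and ambiguity flag are assumed.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na * nb) || 1);
}

// mode is "text", "vision", or "hybrid"; items carry textVec and/or imageVec.
function retrieve(query, items, mode, topK = 5) {
  return items
    .map((item) => {
      let score = 0;
      if (mode !== "vision" && item.textVec) score += cosine(query.textVec, item.textVec);
      if (mode !== "text" && item.imageVec) score += cosine(query.imageVec, item.imageVec);
      return { item, score };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

// Dynamic granularity selection: ambiguous queries search broader units.
function selectPool(query, index) {
  return query.isAmbiguous ? index.documents : index.chunks.concat(index.blocks);
}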
- -
-

III. Iterative Multi-Agent Workflows

-

Unlike single-pass retrieval, Doc-Researcher uses an agentic loop:

-
    -
  • Planner: Decomposes complex, multi-hop queries into sub-tasks.
  • Searcher: Executes the hybrid retrieval to find candidates.
  • Refiner: Evaluates retrieved evidence and decides if more searching is needed (iterative accumulation).
  • Synthesizer: Integrates multimodal evidence to form a final, cited answer.
-
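A compressed JavaScript sketch of that loop, assuming the four agents are provided as async functions (for example, thin wrappers around LLM calls). The control flow is an interpretation of the description above, not the paper's reference implementation.

// Assumes agents = { planner, searcher, refiner, synthesizer }, each async.
async function docResearch(question, agents, maxRounds = 4) {
  const subTasks = await agents.planner(question);         // decompose multi-hop query
  const evidence = [];

  for (const task of subTasks) {
    for (let round = 0; round < maxRounds; round++) {
      const found = await agents.searcher(task, evidence);   // hybrid retrieval
      evidence.push(...found);
      const verdict = await agents.refiner(task, evidence);  // enough evidence yet?
      if (verdict.sufficient) break;                          // otherwise iterate again
    }
  }

  // Integrate multimodal evidence into a final, cited answer.
  return agents.synthesizer(question, evidence);
}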
- -
-

M4DocBench & Evaluation

-

To evaluate these capabilities, the authors introduced M4DocBench (Multi-modal, Multi-hop, Multi-document, and Multi-turn). It consists of 158 expert-level questions spanning 304 documents. This benchmark requires the model to "connect the dots" across multiple files and modalities.
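The page does not show the benchmark's data format, so the object below is only a hypothetical JavaScript illustration of what a multi-hop, multi-document, multi-turn item might bundle together; every field name and value is invented.

// Hypothetical item shape; all field names and values are invented for illustration.
const exampleItem = {
  id: "m4docbench-0001",
  turns: [
    { question: "Which baseline does Figure 3 of report A compare against?" },
    { question: "How does that number relate to the table on page 12 of report B?" },
  ],
  evidence: [
    { doc: "report-A.pdf", page: 7, modality: "figure" },
    { doc: "report-B.pdf", page: 12, modality: "table" },
  ],
  answer: "A short expert-written answer that cites both documents.",
};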

-
- -
-

Experimental Outcomes

-
-
- Direct Comparison -
50.6% accuracy vs. ~15% for state-of-the-art baselines (3.4x improvement).
-
-
- Ablation -
Removing the "Visual Semantics" component caused the largest performance drop, proving layout matters.
-
-
-
- -
WAP - Academic rigor for deep documents.
-
- - - diff --git a/papers/https-arxiv-org-abs-2511-13719/grad-zh.html b/papers/https-arxiv-org-abs-2511-13719/grad-zh.html deleted file mode 100644 index 7705525..0000000 --- a/papers/https-arxiv-org-abs-2511-13719/grad-zh.html +++ /dev/null @@ -1,88 +0,0 @@ - - - - - - Doc-Researcher (研究生版) | WAP - - - - - -
-
-
ACADEMIC / GRADUATE EDITION
-

Doc-Researcher: Breaking the Multimodal Processing Bottleneck for Complex Documents

-

A technical deep dive: deep multimodal parsing, adaptive retrieval, and agentic evidence synthesis.

-
- - - -
-

Motivation and Problem Definition

-

Current "Deep Research" systems (e.g., LLM-based ones) are largely limited to text-centric web data. In professional domains, core knowledge often lives in highly structured multimodal documents (PDFs/scans). Conventional RAG (Retrieval-Augmented Generation) pipelines tend to break down in this setting because they "flatten" the document, losing key visual semantics such as chart axes, visual hierarchy, and nested table relationships.

-
- -
-

I. The Deep Multimodal Parsing Engine

-

Doc-Researcher adopts a parsing engine that preserves multimodal integrity and builds a multi-level representation scheme:

-
    -
  • Chunk-level: Captures local context, including inline formulas and mathematical symbols.
  • Block-level: Follows logical visual boundaries (e.g., a specific figure together with its caption).
  • Document-level: Maintains the global layout structure and semantics.
-
Key innovation: the system maps visual elements to text descriptions while retaining the original pixel features for vision-centric retrieval.
-
- -
-

II. A Systematic Hybrid Retrieval Architecture

-

Doc-Researcher supports three retrieval paradigms:

-
    -
  1. Text-only: Standard semantic search over text chunks.
  2. Vision-only: Retrieves document regions directly by visual similarity.
  3. Hybrid: Combines text and vision signals with dynamic granularity selection, automatically switching between fine-grained chunks and broader document context depending on query ambiguity.
-
- -
-

III. Iterative Multi-Agent Workflow

-

Unlike single-pass retrieval, Doc-Researcher introduces an agentic loop:

-
    -
  • Planner: Decomposes complex multi-hop queries into sub-tasks.
  • Searcher: Runs hybrid retrieval to collect candidate evidence.
  • Refiner: Evaluates the retrieved evidence and decides whether further search is needed (iterative accumulation).
  • Synthesizer: Integrates the multimodal evidence into a final, cited answer.
-
- -
-

The M4DocBench Evaluation

-

To evaluate these capabilities thoroughly, the authors propose M4DocBench (Multi-modal, Multi-hop, Multi-document, Multi-turn). It contains 158 expert-annotated, high-difficulty questions over 304 complex documents and requires the model to "connect the clues" across files and modalities.

-
- -
-

Experimental Results

-
-
- Direct Comparison -
Doc-Researcher reaches 50.6% accuracy, roughly 3.4x the accuracy of the strongest existing baselines (~15%).
-
-
- Ablation Study -
Removing the "visual semantics" component causes the largest performance drop, underscoring the central role of layout information in document understanding.
-
-
-
- -
WAP - Rigorous insight for deep document research.
-
- - - diff --git a/papers/https-arxiv-org-abs-2511-13719/hs-en.html b/papers/https-arxiv-org-abs-2511-13719/hs-en.html deleted file mode 100644 index eba2b0f..0000000 --- a/papers/https-arxiv-org-abs-2511-13719/hs-en.html +++ /dev/null @@ -1,67 +0,0 @@ - - - - - - Doc-Researcher (HS-EN) | WAP - - - - - -
-
-
HIGH SCHOOL EDITION / ENGLISH
-

How AI Reads Complex Documents: Doc-Researcher

-

Most AI systems only "read" text. Doc-Researcher is a new system that actually understands charts, tables, and layouts like a human expert does.

-
- - - -
-

The "Wall" for Traditional AI

-

Imagine asking an AI to analyze a 50-page financial report or a scientific paper. Most current AIs can grab the text, but they get confused by complex layout diagrams, math equations, or data hidden in tables. They treat everything like a flat block of words, missing the "visual language" of the document.

-
The Gap: AI has been "blind" to the visual structure and multimodal data (images + text) inside documents.
-
- -
-

The Doc-Researcher Solution

-

The researchers created a three-step brain for the AI:

-
-
- 1. Smart Parsing -
It doesn't just copy text; it sees where every chart and table is, preserving its meaning.
-
-
- 2. Hybrid Search -
It can look for things by text descriptions or by visual appearance, picking the best way to find evidence.
-
-
- 3. Teamwork Agents -
Instead of one try, it uses several "AI agents" that brainstorm, look for more clues, and combine them into a final answer.
-
-
-
- -
-

Real-World Results

-

The team created a new test called M4DocBench. It has 158 very hard questions that require "jumping" between different documents and looking at pictures to find the answer.

-
Doc-Researcher got 50.6% accuracy, which is 3.4 times better than previous top-tier AI systems!
-
- -
-

Curious about the math and logic?

-

If you want to see the specific technical architecture and deep data science behind this, check out the Graduate version.

- View Graduate Version (EN) -
- -
WAP - Simplified paper insights.
-
- - - diff --git a/papers/https-arxiv-org-abs-2511-13719/hs-zh.html b/papers/https-arxiv-org-abs-2511-13719/hs-zh.html deleted file mode 100644 index 584d87d..0000000 --- a/papers/https-arxiv-org-abs-2511-13719/hs-zh.html +++ /dev/null @@ -1,67 +0,0 @@ - - - - - - Doc-Researcher (高中版) | WAP - - - - - -
-
-
POPULAR SCIENCE / HIGH SCHOOL EDITION
-

How AI Reads Complex Documents: Doc-Researcher Explained

-

Most AI systems can only "read the words." Doc-Researcher, by contrast, understands charts, tables, and document layout the way a human expert does.

-
- - - -
-

Traditional AI's "Blind Spot"

-

Imagine being asked to analyze a 50-page financial report or a scientific paper. Most AIs can only extract the text; once they hit complex diagrams, mathematical formulas, or data hidden in tables, they get confused. Because the images and layout are lost, the AI cannot draw deep knowledge from truly professional documents.

-
The key gap: because AI could not "see" the visual structure of images, it had a huge blind spot when working with complex documents.
-
- -
-

The Doc-Researcher Solution

-

The researchers built three key components for the AI:

-
-
- 1. Deep Multimodal Parsing -
It does not just copy the text; it locates every chart and table and preserves their visual meaning.
-
-
- 2. Hybrid Search -
It can search by text descriptions or by visual features, choosing whichever path finds the evidence best.
-
-
- 3. Iterative Teamwork -
It uses several "AI agents" working as a team: some break the question apart, some hunt for evidence, and the results are merged into a complete answer.
-
-
-
- -
-

How Does It Perform?

-

The team built a new test called M4DocBench, with 158 very hard questions that require the AI to "jump" between multiple documents and look at images to answer.

-
Doc-Researcher reached an accuracy of 50.6%, a 3.4x improvement over the previous state-of-the-art AI systems!
-
- -
-

Want the Deeper Logic?

-

If you want the detailed architecture and the data science behind it, see the Graduate edition.

- 查看研究生版本 (中文) -
- -
WAP - Making research papers easy to understand.
-
- - - diff --git a/papers/https-arxiv-org-abs-2511-13719/index.html b/papers/https-arxiv-org-abs-2511-13719/index.html deleted file mode 100644 index 34e12a3..0000000 --- a/papers/https-arxiv-org-abs-2511-13719/index.html +++ /dev/null @@ -1,97 +0,0 @@ - - - - - - Doc-Researcher | WAP - - - - - - - - - -
-
-
-
WAP PAPER HUB
-

Doc-Researcher: A Unified System for Multimodal Document Parsing and Deep Research

-

A groundbreaking system that solves complex research queries by deeply parsing multimodal documents (figures, tables, charts) and using iterative agent workflows.

- -
-
-
Paper Snapshot
-
-
- Authors -
Dong Kuicai, Huang Shurui, Ye Fangda, Han Wei, Zhang Zhi, and colleagues.
-
-
- Date -
October 2025
-
-
- Core Tech -
Multimodal Parsing, Hybrid Retrieval, Multi-Agent Workflow
-
-
- Benchmark -
M4DocBench (158 expert-annotated questions)
-
-
-
-
- -
-

Choose your depth

- -
- -
-

Resources

-
-
- arXiv Abstract - 2510.21603 -
-
- Paper PDF - Download PDF -
-
-
- -
WAP - bridging the gap in document research.
-
- - - - diff --git a/papers/https-arxiv-org-abs-2511-13719/script.js b/papers/https-arxiv-org-abs-2511-13719/script.js deleted file mode 100644 index 7a4f9b2..0000000 --- a/papers/https-arxiv-org-abs-2511-13719/script.js +++ /dev/null @@ -1,77 +0,0 @@ -const copyButtons = document.querySelectorAll("[data-copy], [data-copy-target]"); - -copyButtons.forEach((button) => { - button.addEventListener("click", async () => { - const directText = button.getAttribute("data-copy"); - const targetId = button.getAttribute("data-copy-target"); - let text = directText; - - if (!text && targetId) { - const target = document.getElementById(targetId); - if (target) { - text = target.textContent.trim(); - } - } - - if (!text) return; - - try { - await navigator.clipboard.writeText(text); - const original = button.textContent; - button.textContent = "Copied"; - setTimeout(() => { - button.textContent = original; - }, 1400); - } catch (err) { - button.textContent = "Copy failed"; - } - }); -}); - -const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches; -const revealElements = document.querySelectorAll(".reveal"); - -if (prefersReducedMotion) { - revealElements.forEach((el) => el.classList.add("is-visible")); -} else { - const revealObserver = new IntersectionObserver( - (entries) => { - entries.forEach((entry) => { - if (entry.isIntersecting) { - entry.target.classList.add("is-visible"); - } - }); - }, - { threshold: 0.18 } - ); - - revealElements.forEach((el) => revealObserver.observe(el)); -} - -const navLinks = Array.from(document.querySelectorAll(".section-nav a")); -const sections = Array.from(document.querySelectorAll("section[data-section]")); -const linkById = new Map(); - -navLinks.forEach((link) => { - const href = link.getAttribute("href") || ""; - if (href.startsWith("#")) { - linkById.set(href.slice(1), link); - } -}); - -if (navLinks.length && sections.length) { - const navObserver = new IntersectionObserver( - (entries) => { - entries.forEach((entry) => { - if (!entry.isIntersecting) return; - const active = linkById.get(entry.target.id); - if (!active) return; - navLinks.forEach((link) => link.classList.remove("active")); - active.classList.add("active"); - }); - }, - { threshold: 0.4, rootMargin: "0px 0px -40% 0px" } - ); - - sections.forEach((section) => navObserver.observe(section)); -} diff --git a/papers/https-arxiv-org-abs-2511-13719/styles.css b/papers/https-arxiv-org-abs-2511-13719/styles.css deleted file mode 100644 index 6583e11..0000000 --- a/papers/https-arxiv-org-abs-2511-13719/styles.css +++ /dev/null @@ -1,457 +0,0 @@ -:root { - color-scheme: light; - --bg: #f7f1e7; - --bg-2: #edf4ff; - --bg-3: #f4f7f0; - --ink: #101523; - --muted: #4b5563; - --line: rgba(15, 23, 42, 0.12); - --accent: #ff6a3d; - --accent-2: #1c7cff; - --accent-3: #14b8a6; - --card: #ffffff; - --shadow-lg: 0 30px 70px rgba(15, 23, 42, 0.14); - --shadow-md: 0 16px 40px rgba(15, 23, 42, 0.12); - --shadow-sm: 0 10px 28px rgba(15, 23, 42, 0.08); - --radius-xl: 26px; - --radius-lg: 20px; - --radius-md: 14px; - --radius-sm: 10px; -} - -* { - box-sizing: border-box; -} - -body { - margin: 0; - font-family: "Space Grotesk", "Noto Sans SC", system-ui, sans-serif; - color: var(--ink); - background: linear-gradient(120deg, var(--bg) 0%, var(--bg-2) 45%, var(--bg-3) 100%); - min-height: 100vh; -} - -body::before { - content: ""; - position: fixed; - inset: 0; - background: - radial-gradient(circle at 10% 10%, rgba(255, 106, 61, 0.12), transparent 40%), - radial-gradient(circle at 90% 20%, 
rgba(28, 124, 255, 0.12), transparent 45%), - radial-gradient(circle at 20% 80%, rgba(20, 184, 166, 0.12), transparent 40%), - repeating-linear-gradient(90deg, rgba(15, 23, 42, 0.05) 0 1px, transparent 1px 26px), - repeating-linear-gradient(0deg, rgba(15, 23, 42, 0.04) 0 1px, transparent 1px 26px); - opacity: 0.7; - pointer-events: none; - z-index: 0; -} - -.backdrop { - position: fixed; - inset: 0; - pointer-events: none; - z-index: 1; - overflow: hidden; -} - -.backdrop .flare { - position: absolute; - border-radius: 999px; - opacity: 0.7; - filter: blur(0px); -} - -.backdrop .flare.one { - width: 420px; - height: 420px; - background: radial-gradient(circle, rgba(255, 106, 61, 0.45), rgba(255, 106, 61, 0)); - top: -140px; - right: -120px; -} - -.backdrop .flare.two { - width: 360px; - height: 360px; - background: radial-gradient(circle, rgba(28, 124, 255, 0.35), rgba(28, 124, 255, 0)); - bottom: -140px; - left: -80px; -} - -.backdrop .flare.three { - width: 240px; - height: 240px; - background: radial-gradient(circle, rgba(20, 184, 166, 0.3), rgba(20, 184, 166, 0)); - top: 45%; - left: 55%; -} - -.page { - position: relative; - z-index: 2; - max-width: 1160px; - margin: 0 auto; - padding: 48px 24px 96px; -} - -.hero { - display: grid; - grid-template-columns: minmax(0, 1.15fr) minmax(0, 0.85fr); - gap: 28px; - align-items: stretch; - margin-bottom: 28px; -} - -.eyebrow { - font-size: 12px; - letter-spacing: 0.28em; - text-transform: uppercase; - color: var(--accent-2); - font-weight: 700; -} - -.hero h1 { - margin: 14px 0 10px; - font-size: clamp(32px, 5vw, 56px); - line-height: 1.05; -} - -.subtitle { - margin: 0; - color: var(--muted); - line-height: 1.7; - font-size: 16px; -} - -.hero-actions { - display: flex; - flex-wrap: wrap; - gap: 10px; - margin-top: 18px; -} - -.btn { - text-decoration: none; - padding: 10px 16px; - border-radius: 999px; - border: 1px solid rgba(16, 21, 35, 0.12); - font-weight: 600; - font-size: 13px; - color: var(--ink); - background: rgba(255, 255, 255, 0.7); - transition: transform 0.2s ease, box-shadow 0.2s ease; -} - -.btn.primary { - background: var(--ink); - color: #ffffff; - border-color: transparent; -} - -.btn:hover { - transform: translateY(-2px); - box-shadow: var(--shadow-sm); -} - -.hero-panel { - background: var(--card); - border-radius: var(--radius-xl); - padding: 22px; - border: 1px solid var(--line); - box-shadow: var(--shadow-lg); - display: grid; - gap: 16px; -} - -.hero-panel .panel-title { - font-size: 14px; - font-weight: 700; - text-transform: uppercase; - letter-spacing: 0.2em; - color: var(--accent); -} - -.attention-grid { - height: 120px; - border-radius: 18px; - background: - linear-gradient(120deg, rgba(255, 106, 61, 0.25), rgba(28, 124, 255, 0.25)), - repeating-linear-gradient(90deg, rgba(255, 255, 255, 0.8) 0 1px, transparent 1px 12px), - repeating-linear-gradient(0deg, rgba(255, 255, 255, 0.6) 0 1px, transparent 1px 12px); - position: relative; - overflow: hidden; -} - -.attention-grid::after { - content: ""; - position: absolute; - inset: 0; - background: - radial-gradient(circle at 20% 30%, rgba(255, 106, 61, 0.6), transparent 55%), - radial-gradient(circle at 70% 60%, rgba(28, 124, 255, 0.5), transparent 60%), - radial-gradient(circle at 45% 75%, rgba(20, 184, 166, 0.4), transparent 55%); - mix-blend-mode: multiply; -} - -.meta-grid { - display: grid; - gap: 12px; -} - -.meta-item { - padding: 12px 14px; - border-radius: var(--radius-md); - border: 1px solid rgba(16, 21, 35, 0.08); - background: rgba(248, 250, 252, 0.85); 
-} - -.meta-label { - font-size: 11px; - letter-spacing: 0.18em; - text-transform: uppercase; - color: var(--accent-2); - font-weight: 700; - display: block; - margin-bottom: 6px; -} - -.meta-value { - font-size: 14px; - color: var(--muted); - line-height: 1.5; -} - -.section-nav { - display: flex; - flex-wrap: wrap; - gap: 10px; - margin: 24px 0 12px; -} - -.section-nav a { - text-decoration: none; - padding: 8px 14px; - border-radius: 999px; - border: 1px solid rgba(16, 21, 35, 0.12); - font-size: 12px; - font-weight: 600; - color: var(--ink); - background: rgba(255, 255, 255, 0.65); -} - -.section-nav a.active { - background: var(--accent-2); - color: #ffffff; - border-color: transparent; -} - -.chapter { - margin-top: 22px; - padding: 24px; - border-radius: var(--radius-lg); - background: var(--card); - border: 1px solid var(--line); - box-shadow: var(--shadow-md); - position: relative; -} - -.chapter::before { - content: ""; - position: absolute; - top: 20px; - left: 20px; - width: 6px; - height: 24px; - border-radius: 999px; - background: linear-gradient(180deg, var(--accent), var(--accent-2)); -} - -.chapter h2 { - margin: 0 0 12px 18px; - font-size: 22px; -} - -.chapter h3 { - margin: 16px 0 8px; - font-size: 16px; - color: var(--accent-2); - font-weight: 600; -} - -.chapter p { - margin: 0 0 12px; - line-height: 1.7; - color: var(--muted); -} - -.chapter ul { - margin: 0; - padding-left: 20px; - color: var(--muted); - line-height: 1.7; -} - -.chapter li { - margin-bottom: 8px; -} - -.highlight-row { - display: grid; - gap: 12px; - grid-template-columns: repeat(auto-fit, minmax(220px, 1fr)); - margin-top: 14px; -} - -.highlight-card { - padding: 14px 16px; - border-radius: var(--radius-md); - border: 1px solid rgba(16, 21, 35, 0.12); - background: rgba(248, 250, 252, 0.9); - font-size: 14px; - color: var(--muted); - text-decoration: none; - display: block; - transition: transform 0.2s ease, box-shadow 0.2s ease; -} - -.highlight-card strong { - color: var(--ink); - display: block; - margin-bottom: 6px; -} - -.highlight-card:hover { - transform: translateY(-3px); - box-shadow: var(--shadow-sm); -} - -.callout { - background: rgba(255, 106, 61, 0.1); - border-left: 3px solid var(--accent); - padding: 12px 14px; - border-radius: var(--radius-sm); - color: var(--muted); - margin-top: 12px; -} - -.formula { - font-family: "IBM Plex Mono", "Space Grotesk", monospace; - font-size: 13px; - background: rgba(15, 23, 42, 0.04); - border-radius: var(--radius-sm); - border: 1px dashed rgba(28, 124, 255, 0.3); - padding: 12px 14px; - overflow-x: auto; - color: #0f172a; -} - -.data-grid { - display: grid; - gap: 12px; - grid-template-columns: repeat(auto-fit, minmax(180px, 1fr)); - margin-top: 12px; -} - -.data-chip { - padding: 12px 14px; - border-radius: 999px; - border: 1px solid rgba(16, 21, 35, 0.12); - background: rgba(255, 255, 255, 0.9); - font-size: 13px; - font-weight: 600; - text-align: center; -} - -.resource-list { - display: grid; - gap: 12px; -} - -.resource-item { - display: flex; - flex-wrap: wrap; - align-items: center; - justify-content: space-between; - gap: 8px; - padding: 12px 16px; - border-radius: var(--radius-md); - border: 1px solid var(--line); - background: rgba(255, 255, 255, 0.9); -} - -.resource-item span { - font-weight: 600; - color: var(--muted); -} - -.resource-item a { - text-decoration: none; - font-weight: 600; - color: var(--accent-2); -} - -.citation { - background: #0f172a; - color: #f8fafc; - padding: 16px; - border-radius: var(--radius-md); - font-size: 
13px; - line-height: 1.6; - overflow-x: auto; -} - -.copy-btn { - margin-top: 10px; - border: none; - padding: 8px 16px; - border-radius: 999px; - font-weight: 600; - background: var(--accent-2); - color: #ffffff; - cursor: pointer; -} - -.footer { - margin-top: 42px; - text-align: center; - font-size: 13px; - color: var(--muted); -} - -.reveal { - opacity: 0; - transform: translateY(16px); - transition: opacity 0.6s ease, transform 0.6s ease; -} - -.reveal.is-visible { - opacity: 1; - transform: translateY(0); -} - -@media (max-width: 900px) { - .hero { - grid-template-columns: 1fr; - } -} - -@media (max-width: 640px) { - .page { - padding: 36px 18px 70px; - } - - .hero-panel { - padding: 18px; - } - - .chapter { - padding: 20px; - } - - .chapter h2 { - font-size: 20px; - } - - .section-nav a { - font-size: 11px; - } -}