About

How this system turns news into judgment

LLM-powered News Flow · 27 daily issues · 5 weekly digests · 14 timeline days
  • Configured feeds: 28
  • Primary intake feeds: 25
  • Watch-only feeds: 3
  • Daily issue cap: 20

RSS Agent is not a generic news site. It is an LLM-driven editorial pipeline that starts from a Folo-led source stack plus multi-category RSS feeds, then moves through gate-keeping, scoring, and interpretation before it becomes a daily issue, weekly digest, podcast script, or platform summary.

The value of the project is not that it shows you more information. The value is that it compresses noise into judgment, organizes fragmented signals into continuous observation, and leaves behind an archive you can revisit, compare, and act on.

This is not a frontend shell that simply renders whatever it grabs. The system first de-duplicates, applies editorial judgment, scores signal quality, and structures the output. What you see on the homepage is the editorial result; what you open in a detail page is the generated Markdown artifact itself.

The goal is not to imitate a newsroom. The goal is to replace the repetitive manual work of sorting news by fixing a stable path from noisy inputs to a research-grade daily issue.

Primary flow: Fetch -> Gate-Keep -> Score -> Insight -> Daily / Weekly / Podcast -> Publish
Core value: Compress noisy inputs into actionable judgment, then keep high-value signals under continuous watch.

Every input source is explicitly configured in configs/feeds.json. The pipeline does not discover random sources at runtime. It continuously tracks a known source stack so the archive stays auditable.

Primary discovery layer

Folo List

The old "Follow AI 列表" (Follow AI list) alias still exists in configuration for backward compatibility, but the product-facing explanation now treats the source consistently as the Folo list. It is the single most important discovery entry point in the entire system.

Core · Weight 100 · Cap 2500
Its role in the system

This is not a single RSS feed. It is a curated, continuously maintained list inside Folo that prioritizes high-value signals across AI, developer tools, macro, investment, and cross-domain trend changes.

Why it is explained first
  • It sits at the front of the discovery chain and plays the role of "find first, verify next."
  • The list can mix media outlets, blogs, community feeds, hot lists, and research sources in one place, which gives it broader coverage than any single RSS feed.
  • When a single source updates too slowly or sees too little, the Folo list usually surfaces the move earlier and pushes it into the main intake pool faster.
  • Independent RSS feeds and coverage-repair logic still run afterward, so the system does not turn one hot list directly into editorial truth.
What weight 100 actually means

The weight is not a score for one article. It tells the fetch layer that the Folo list is the highest-priority discovery entrance. If a high-value movement appears there, it should enter the main intake pool earlier than ordinary single-feed updates and then pass through the later de-duplication, filtering, and scoring stages.

Bloomberg, Financial Times, The Wall Street Journal, Xueqiu Hot, Zhihu Hot List, Hacker News, V2EX, 36Kr 24h Hot, Readhub Morning Brief, Binance Announcements, OpenAI News, The GitHub Blog, Google Research Blog, Google Developers Blog, Martin Fowler, Last Week in AI, Anthropic News, a16z crypto
  • 11 Core sources: High-priority and higher-trust inputs that are suitable for direct entry into the main pool.
  • 14 Signal sources: Used to broaden coverage and import external signals so the issue is not dominated by one source type.
  • 3 Watch sources: Primarily for trend watching and hot-list monitoring, not guaranteed entry into the main daily pool.

  • `weight` controls source priority
  • `dailyCap` limits single-feed flooding
  • `keepInMainPool` decides main-pool entry
  • `tier` shapes fetch order and repair strategy
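Put together, a single entry in configs/feeds.json might look like the sketch below. The field names `weight`, `dailyCap`, `keepInMainPool`, and `tier` are taken from this page; the interface name, the placeholder URL, and the tier-then-weight fetch ordering are illustrative assumptions, not the project's actual schema.

```typescript
// Hypothetical shape of one entry in configs/feeds.json. Only the four
// knob names are from the documentation; the rest is illustrative.
interface FeedConfig {
  name: string;                      // display name in the source list
  url: string;                       // feed endpoint (placeholder below)
  tier: "core" | "signal" | "watch"; // shapes fetch order and repair strategy
  weight: number;                    // source priority (Folo list = 100)
  dailyCap: number;                  // limits single-feed flooding
  keepInMainPool: boolean;           // decides main-pool entry
}

const foloList: FeedConfig = {
  name: "Folo List",
  url: "https://example.com/folo.xml", // placeholder, not the real endpoint
  tier: "core",
  weight: 100,
  dailyCap: 2500,
  keepInMainPool: true,
};

// One plausible fetch ordering: tier first, then weight descending.
const tierRank: Record<FeedConfig["tier"], number> = {
  core: 0,
  signal: 1,
  watch: 2,
};

function fetchOrder(feeds: FeedConfig[]): FeedConfig[] {
  return [...feeds].sort(
    (a, b) => tierRank[a.tier] - tierRank[b.tier] || b.weight - a.weight
  );
}
```

Under this sketch, a core-tier source is always fetched before a watch-tier one, and within a tier the higher `weight` goes first.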

AI

High-priority signals around model capability, product form factors, research progress, and industry direction.

7 feeds
Folo List
Core · Weight 100 · Cap 2500
Main pool
OpenAI Blog
Core · Weight 95 · Cap 8
Main pool
Google DeepMind
Core · Weight 90 · Cap 6
Main pool
机器之心
Signal · Weight 90 · Cap 12
Main pool
Hugging Face Blog
Core · Weight 85 · Cap 8
Main pool
量子位
Signal · Weight 85 · Cap 10
Main pool
Last Week in AI
Signal · Weight 80 · Cap 4
Main pool

Technology

External shifts from major platforms, infrastructure, consumer tech, and large-company strategy.

4 feeds
Hacker News
Signal · Weight 85 · Cap 15
Main pool
Solidot
Signal · Weight 75 · Cap 8
Main pool
TechCrunch AI
Signal · Weight 75 · Cap 10
Main pool
Ars Technica
Core · Weight 70 · Cap 10
Main pool

Software Engineering

Developer tools, open source, frameworks, and changes in engineering productivity.

3 feeds
GitHub Blog
Core · Weight 70 · Cap 6
Main pool
LangChain Blog
Signal · Weight 65 · Cap 6
Main pool
HelloGitHub
Signal · Weight 60 · Cap 4
Main pool

Business

Company strategy, earnings, market structure, and industry-chain changes worth tracking.

3 feeds
FT中文网
Core · Weight 80 · Cap 10
Main pool
路透中文网
Core · Weight 75 · Cap 10
Main pool
日经中文网
Signal · Weight 65 · Cap 8
Main pool

Policy

Regulation, geopolitics, export controls, and structural policy variables.

4 feeds
纽约时报中文网
Signal · Weight 70 · Cap 8
Main pool
BBC News 中文
Signal · Weight 70 · Cap 8
Main pool
澎湃新闻
Signal · Weight 65 · Cap 8
Main pool
联合早报
Signal · Weight 60 · Cap 8
Main pool

Social

Trending topics and public discussion used as a watch layer, not as the main editorial driver.

3 feeds
微博热搜
Watch · Weight 60 · Cap 8
Watch
知乎热榜
Watch · Weight 60 · Cap 8
Watch
X: Elon Musk
Watch · Weight 55 · Cap 5
Watch

The first goal is not to fetch more. It is to remove duplicates, weak signals, and noise that adds no judgment value before filling the coverage gaps that still matter.

01

Fetch the main intake pool

The pipeline pulls raw items from the last 24 hours according to source priority and per-feed caps, including RSS feeds, the Folo list, and watch-layer sources.

02

Pre-filter and event compression

Similar headlines and repeat coverage are compressed into event candidates first, so duplicate reporting does not distort later scoring.

03

Coverage repair

If a category looks thin on a given day, the pipeline fetches targeted backup coverage so the report is not captured by a single track.

04

Editorial agenda plus gate-keep

The system decides the daily editorial agenda first, then produces PASS, DROP, or MERGE decisions to keep only the items that deserve real space in the issue.

After gate-keeping, the system does not simply sort by heat. It scores each candidate across six dimensions and then combines that result with category strategy.

Signal Strength · 24%

Whether the item actually changes your view of where an industry, market, or platform is moving.

Future Impact · 23%

Whether it is likely to matter over the next few days or weeks, instead of being only today's noise.

Personal Relevance · 18%

How directly it connects to the core tracking surface: AI, products, engineering, and investment.

Decision Usefulness · 17%

Whether it helps the next action: follow-up tracking, hypothesis building, or judgment revision.

Credibility · 12%

The quality, completeness, and verifiability of the source material behind the item.

Timeliness · 6%

Its urgency within the last 24 hours and whether it deserves space in the current daily issue.

Extract the most valuable information from the last 24 hours and prioritize the signals that best help judge what may matter next.

AI
Category weight 1.00 · Minimum coverage 3
Investment
Category weight 0.95 · Minimum coverage 4
Technology
Category weight 0.75 · Minimum coverage 2
Software Engineering
Category weight 0.70 · Minimum coverage 2
Business
Category weight 0.60 · Minimum coverage 1
Policy
Category weight 0.55 · Minimum coverage 1
Social
Category weight 0.25 · Minimum coverage 0
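The minimum-coverage numbers above can be enforced at selection time so that one hot track cannot consume the whole issue. A sketch, assuming scores already include category weights and boosts; the two-pass strategy (reserve each category's minimum first, then fill remaining slots by score) is an assumption about the mechanism, not the project's documented algorithm:

```typescript
// Hypothetical selection pass. The minimum-coverage idea comes from the
// table above; the two-pass reservation strategy is an assumption.
interface Candidate {
  id: string;
  category: string;
  score: number; // assumed to already include category weight and boosts
}

function selectWithCoverage(
  pool: Candidate[],
  minCoverage: Record<string, number>,
  cap: number
): Candidate[] {
  const sorted = [...pool].sort((a, b) => b.score - a.score);
  const picked: Candidate[] = [];
  // Pass 1: satisfy each category's minimum with its best items.
  for (const [category, min] of Object.entries(minCoverage)) {
    const best = sorted.filter((c) => c.category === category).slice(0, min);
    picked.push(...best);
  }
  // Pass 2: fill the remaining slots strictly by score.
  for (const c of sorted) {
    if (picked.length >= cap) break;
    if (!picked.includes(c)) picked.push(c);
  }
  return picked.slice(0, cap);
}
```

With this rule, a low-scoring Policy item can still make the issue on a day when AI stories dominate the score ranking, because its category minimum reserves a slot for it first.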

Themes that must stay under watch

  • Changes in AI model capability and product form factors
  • Risk appetite and capital-flow direction in financial markets
  • Key policy and regulatory shifts
  • Structural moves in platforms, infrastructure, and chips
  • Developer tooling and engineering productivity leaps

Selection principles

  • Allow major policy, macro, financial, or geopolitical events to outrank AI on the days when they clearly matter more.
  • Prefer signals that help judge the next few days or weeks instead of items that are only locally hot.
  • Do not chase exhaustive article-by-article coverage; use the opening and closing sections to hold the trend judgment together.
  • Avoid traffic-heavy stories that generate attention but add little decision value.

Once high-scoring items are confirmed, the system can finally turn them from raw news into a readable daily structure.

Fetch -> Gate-Keep -> Score -> Insight -> Daily / Weekly / Podcast -> Publish

1. Fetch and pre-compress

The day starts with configured feeds only, then similar stories are compressed so the candidate pool reflects events rather than headline volume.

  • `fetchPrimary` collects raw entries from the last 24 hours.
  • `preFilter` collapses duplicated or closely related coverage into observed events.
  • `tier`, `weight`, and `dailyCap` keep feed priority and flooding under control.
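The `preFilter` collapse can be sketched with a deliberately crude heuristic: group items whose normalized headlines match exactly and keep the earliest fetch as the representative. The real pipeline presumably uses richer similarity than exact-match normalization; this is only an assumed stand-in to show the shape of event compression.

```typescript
interface RawItem {
  title: string;
  source: string;
  fetchedAt: number; // unix timestamp
}

// Strip case and punctuation so headline variants share one key.
function normalizeTitle(title: string): string {
  return title.toLowerCase().replace(/[^\p{L}\p{N}]+/gu, " ").trim();
}

// Collapse duplicate coverage into one observed event per key,
// keeping the earliest-fetched item as the representative.
function compressEvents(items: RawItem[]): RawItem[] {
  const byEvent = new Map<string, RawItem>();
  for (const item of items) {
    const key = normalizeTitle(item.title);
    const seen = byEvent.get(key);
    if (!seen || item.fetchedAt < seen.fetchedAt) byEvent.set(key, item);
  }
  return [...byEvent.values()];
}
```

The point of running this before scoring is exactly what the text says: ten outlets repeating one announcement should count as one event candidate, not ten.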

2. Repair coverage and set the agenda

When a category is under-covered, the pipeline adds targeted sources, then decides what the day is actually about before final ranking.

  • `fetchCoverage` fills deficit categories to preserve range across topics.
  • `editorialAgenda` defines the daily framing, must-cover themes, watch signals, and category boosts.
  • This is the point where the issue becomes an editorial product instead of a heat-ranked feed.

3. Gate-keep and score

Candidates are first filtered and merged, then scored across multiple dimensions so only meaningful items reach the issue.

  • `gateKeep` returns PASS, DROP, or MERGE decisions in batch, with heuristic fallback when needed.
  • `score` applies six weighted dimensions plus category weights, daily boosts, and must-cover bonuses.
  • Minimum category coverage is preserved so one track cannot consume the full page.
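The PASS / DROP / MERGE contract, plus a heuristic fallback of the kind mentioned above, might look like the sketch below. The three-way decision type mirrors the text; the specific fallback rules (merge exact duplicate titles, drop items under an assumed score threshold of 3) are illustrative assumptions and not the project's actual heuristics.

```typescript
// The three decision outcomes named in the documentation.
type Decision =
  | { action: "PASS" }
  | { action: "DROP"; reason: string }
  | { action: "MERGE"; into: string }; // id of the event it merges into

interface GateItem {
  id: string;
  title: string;
  preScore: number; // rough pre-gate score, 0–10 (assumed scale)
}

// Fallback used when the LLM gate-keeper is unavailable: merge exact
// duplicate titles, drop clearly weak items, pass everything else.
function heuristicFallback(
  item: GateItem,
  seenTitles: Map<string, string> // normalized title -> first item id
): Decision {
  const key = item.title.toLowerCase().trim();
  const existing = seenTitles.get(key);
  if (existing) return { action: "MERGE", into: existing };
  seenTitles.set(key, item.id);
  if (item.preScore < 3) return { action: "DROP", reason: "low signal" };
  return { action: "PASS" };
}
```

Returning a structured decision rather than a filtered list keeps the batch auditable: every candidate gets an explicit verdict that can be logged alongside the issue.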

4. Generate and distribute

High-scoring items are turned into structured interpretations, then rendered into the daily report and downstream distribution outputs.

  • `insight` converts items into one-liners, event frames, interpretations, and deep-dive candidates.
  • `generateDaily` writes the Markdown issue and preserves the candidate pool for continuity.
  • `podcastGen`, `platformsGen`, `publish`, and `notify` handle scripts, platform copy, publishing, and alerts.
What the final outputs are

Daily detail pages render the generated Markdown directly. Weekly issues extend the same structure, while the homepage is responsible for browsing and historical recall. The frontend does not reinterpret the content itself. It organizes generated artifacts into a readable interface.

Why this project exists

Compress noisy inputs into actionable judgment, then keep high-value signals under continuous watch.