I write blog posts in a number of different places:
- Davblog has been my general blog for about twenty years
- Perl Hacks is where I write about Perl
- My Substack is mostly tech stuff but can also wander into entrepreneurship and other topics
And most of those posts get syndicated to other places:
- Tech stuff will usually end up on Dev.to
- Non-tech stuff will go to Medium
- Occasionally, stuff about Perl will be republished on
It’s also possible that I’ll write original posts on one of these syndication sites without posting to one of my sites first.
Recently, when revamping my website, I decided that I wanted to display a list of recent posts from all of those sources. But because of the syndication, it was all a bit noisy: multiple copies of the same post, repeated titles, and a poor reading experience.
What I wanted was a single, clean feed — a unified view of everything I’ve written, without repetition.
So I wrote a tool.
The Problem
I wanted to:
- Aggregate several feeds into one
- Remove syndicated duplicates automatically
- Prefer the canonical/original version of each post
- Output the result in Atom (or optionally RSS or JSON)
App::FeedDeduplicator is a new CPAN module and CLI tool for aggregating and deduplicating web feeds.
It reads a list of feed URLs from a JSON config file, downloads and parses them, filters out duplicates (based on canonical URLs or titles), sorts the results by date, and emits a clean, modern feed.
How It Works
- A JSON config file provides the list of feeds and the desired output format:
{
    "output_format": "json",
    "max_entries": 10,
    "feeds": [{
        "feed": "",
        "web": "",
        "name": "Perl Hacks"
    }, {
        "feed": "",
        "web": "",
        "name": "Substack"
    }, {
        "feed": "",
        "web": "",
        "name": "Davblog"
    }, {
        "feed": "",
        "web": "",
        "name": "Dev.to"
    }, {
        "feed": "",
        "web": "",
        "name": "Medium"
    }]
}
- Each feed is fetched and parsed using
- For each entry, the linked page is scanned for a <link rel="canonical"> tag
- If found, that canonical URL is used to detect duplicates; if not, the entry’s title is used as a fallback
- Duplicates are discarded, keeping only one version (preferably canonical)
- The resulting list is sorted by date and emitted in Atom, RSS, or JSON
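The canonical-link check is the heart of the deduplication. As a rough illustration (not the module's actual code: canonical_url and dedupe are invented names, and the regex is a simplification of real HTML parsing), the logic looks something like this:

use strict;
use warnings;
use LWP::UserAgent;

# Fetch an entry's page and look for a <link rel="canonical"> tag.
# A crude regex is enough for a sketch; it assumes rel appears
# before href, which proper HTML parsing would not rely on.
sub canonical_url {
    my ($page_url) = @_;
    my $ua  = LWP::UserAgent->new(timeout => 10);
    my $res = $ua->get($page_url);
    return unless $res->is_success;
    return $1 if $res->decoded_content =~
        m{<link[^>]+rel=["']canonical["'][^>]*href=["']([^"']+)["']}i;
    return;
}

# Keep one entry per canonical URL, falling back to the title
# when no canonical link is found. The first entry seen wins.
sub dedupe {
    my (@entries) = @_;
    my (%seen, @keep);
    for my $entry (@entries) {
        my $key = canonical_url($entry->{link}) // $entry->{title};
        push @keep, $entry unless $seen{$key}++;
    }
    return @keep;
}

Keying on the canonical URL is what makes a copy syndicated to Medium or Dev.to collapse onto the original post rather than showing up twice.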
Install via CPAN:
cpanm App::FeedDeduplicator
Then run it with:
feed-deduplicator config.json
If no config file is specified, it will try the FEED_DEDUP_CONFIG environment variable or fall back to ~/.feed-deduplicator/config.json.
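That lookup order is easy to picture in code. A minimal sketch of the documented behaviour (not copied from the module itself):

use strict;
use warnings;

# Lookup order: command-line argument, then the environment
# variable, then the default path under the home directory.
my $config_file = shift @ARGV
    // $ENV{FEED_DEDUP_CONFIG}
    // "$ENV{HOME}/.feed-deduplicator/config.json";

die "Config file '$config_file' not found\n" unless -e $config_file;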
There’s also a Docker image with the latest version installed.
Under the Hood
The tool is written in Perl 5.38+ and uses the new class feature for a cleaner OO structure:
- App::FeedDeduplicator::Aggregator handles feed downloading and parsing
- App::FeedDeduplicator::Deduplicator detects and removes duplicates
- App::FeedDeduplicator::Publisher generates the final output
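If you haven’t seen the class syntax yet, here’s a minimal, hypothetical example in that style — the fields and method are invented for illustration and are not the module’s real interface:

use v5.38;
use experimental 'class';

# Hypothetical sketch of the class-feature style, not the
# module's actual code.
class App::FeedDeduplicator::Publisher {
    field $entries :param;
    field $format  :param = 'atom';

    method publish {
        say 'Publishing ', scalar @$entries, " entries as $format";
    }
}

App::FeedDeduplicator::Publisher->new(
    entries => [],
    format  => 'json',
)->publish;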
It’s all very much a work in progress at the moment. It works for me, but there are bound to be improvements needed so that it works for more people. A few things I already know I want to improve:
- Add a configuration option for the LWP::UserAgent agent identifier string
- Add configuration options for the fixed elements of the generated web feed (name, link and things like that)
- Add a per-feed limit for the number of entries published (I can see a use case where someone wants to publish a single entry from each feed)
- Some kind of configuration template for the JSON version of the output
If you want a clean, single-source feed that represents your writing without duplication, App::FeedDeduplicator might be just what you need.
I’m using it now to power the aggregated feed on my site. Let me know what you think!