Abstract Away the Fetching Process

The screenshot in
Torsten's blog shows him using RSS Bandit to
monitor his
Windows Event Log. Now I thought this was
particularly interesting because I wrote most of
the code that does the processing of RSS feeds and
besides a tweak or two by Torsten to handle local
file:// URLs there were no changes that would
support such functionality. I thought about it a
little and decided that Torsten must have
written an app that monitors his event log and
then periodically writes its current state to a
local RSS file to which he is subscribed. Of
course, I might be wrong but since Torsten is on
vacation he isn't here to correct me so I just hope
my assumption is right.
My mind started to race; a number of things
occurred to me simultaneously on digesting the
screenshot:
- As long as information can be identified
(preferably as a URI) and represented as an XML
infoset there is no reason why an
RSS information aggregator
shouldn't be able to subscribe to it.
- The user should be shielded from the
differences between "actual" XML files fetched
from the web via HTTP and "virtual" XML retrieved
by non-traditional means.
- The same way some Windows apps today can
consume data from any ODBC data source (e.g.
Crystal Reports) it would be interesting if
the same could be done with RSS data sources or
even more generally XML data sources.
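To make the first two points concrete, here's a rough sketch of what such a fetching abstraction could look like. I'm using Java purely for illustration, and the names `FeedSource`, `HttpFeedSource`, and `FileFeedSource` are my own invention, not anything in RSS Bandit:

```java
import java.io.*;
import java.net.*;

// Hypothetical abstraction: the aggregator asks a FeedSource for an
// XML stream and never cares whether it came from HTTP, a local file,
// or some "virtual" provider like an event-log monitor.
interface FeedSource {
    InputStream openXml() throws IOException;
}

// "Actual" XML fetched from the web via HTTP.
class HttpFeedSource implements FeedSource {
    private final URL url;
    HttpFeedSource(URL url) { this.url = url; }
    public InputStream openXml() throws IOException {
        return url.openStream();
    }
}

// XML retrieved from a local file:// style source.
class FileFeedSource implements FeedSource {
    private final File file;
    FileFeedSource(File file) { this.file = file; }
    public InputStream openXml() throws IOException {
        return new FileInputStream(file);
    }
}
```

The aggregator's GUI would only ever see a `FeedSource`, so an event-log provider could slot in later as just another implementation.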
I'm not sure how far I'll take this idea or even if
I'll start working on it, but I will at least make
sure that the RSS processing code abstracts away
from the GUI everything that has to do with how the
information is fetched. This should already be the
case modulo some try...catch blocks for
WebException, but I need to make sure.
Besides the engineering problems (hah, coders
are so pretentious) there are also some usability
issues that need to be solved. Currently when a
user wants to subscribe to a new feed there is a
dialog box where the user enters the name of the
feed, its HTTP or file URL, and what category it
should be placed in. Now how do I make this model work in a
world where the user can subscribe to their
machine's event log or the contents of a network
share via some "RSS data provider"? The only thing
that comes to mind is using custom URI formats but
I somewhat suspect that the web architecture
weenies like Tim Bray would
shit a brick. Interesting stuff.
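For what it's worth, the custom URI format idea could be as simple as letting the scheme name select the "RSS data provider". A hypothetical sketch, where the `eventlog://` and `netshare://` schemes are invented for illustration and aren't anything RSS Bandit defines:

```java
import java.net.URI;

// Hypothetical custom URI scheme for "RSS data providers": the scheme
// picks the provider and the rest identifies the resource, e.g.
//   eventlog://localhost/Application
//   netshare://fileserver/public
final class ProviderUri {
    final String provider;  // would map to a registered data provider
    final String resource;  // provider-specific resource identifier

    ProviderUri(String raw) {
        URI uri = URI.create(raw);
        this.provider = uri.getScheme();
        this.resource = uri.getHost() + uri.getPath();
    }
}
```

The subscribe dialog could then stay exactly as it is; only the URL field's interpretation changes.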
Abstract Away the Caching Process

I've
mentioned in the past that I'd like to add an
abstract caching layer to RSS Bandit. Currently my
caching code is all hardwired to assume the cache
is on the file system, but the closer we get to
shipping
Yukon the more I want to use it as my cache, at
least on my own machine. Abstracting away the
cache seems fairly straightforward for my current
implementation: all I do is load and save files
(somewhat analogous to HTTP GET & PUT).
Creating a CacheManager that supports both these
operations and then creating specific
implementations like FileCacheManager &
YukonCacheManager seems like the thing to do.
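A minimal sketch of that design, again in Java for illustration. The load/save method names and the XML-string payload are my guesses, not RSS Bandit's actual code; a YukonCacheManager would implement the same two operations against the database instead:

```java
import java.io.*;
import java.nio.file.*;

// Two operations, roughly analogous to HTTP GET & PUT.
abstract class CacheManager {
    abstract String load(String feedId) throws IOException;
    abstract void save(String feedId, String xml) throws IOException;
}

// File-system implementation: one cached XML file per feed.
class FileCacheManager extends CacheManager {
    private final Path cacheDir;
    FileCacheManager(Path cacheDir) throws IOException {
        this.cacheDir = Files.createDirectories(cacheDir);
    }
    String load(String feedId) throws IOException {
        return new String(Files.readAllBytes(cacheDir.resolve(feedId + ".xml")));
    }
    void save(String feedId, String xml) throws IOException {
        Files.write(cacheDir.resolve(feedId + ".xml"), xml.getBytes());
    }
}
```

The rest of the code would only ever talk to the abstract CacheManager, so swapping the backing store wouldn't touch the callers.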
The complexity comes in when I want to do more
with the cache though. I guess I'll cross that
bridge when I get to it.