in reply to Web Scraping
Break your pipeline into chunks and define a suitable interface for inputs and outputs at each stage; then it doesn't matter how any particular phase of the pipeline is implemented, so long as it produces correctly formatted output. Somewhat handwave-y, but based on your example you have two or maybe three steps: scrape, munge, store.
Working backwards, you might first pin down a JSON representation of your stored DB format. The munging stage then needs to produce that from the raw data. Likewise, have the munge stage expect a common raw input format, so it doesn't matter what's doing the actual scraping (e.g. if you find a Python Twitter source that works best for something, it can feed results alongside something producing results with WWW::Mechanize from a site that Perl scrapes easily). You could also have a scraper produce the canonical DB format directly, if that's appropriate.
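A minimal sketch of that hand-off, in Python since one of your sources may already be Python (all field names, the raw format, and the `munge` helper here are invented for illustration, not a prescribed schema): every scraper emits the same raw dict shape, the munge stage normalizes it into the canonical JSON record, and the store stage only ever sees that canonical form.

```python
import json

def scrape_stub(url):
    # Stand-in for any scraper (Python, WWW::Mechanize, ...): the only
    # contract is that it emits the agreed-upon raw format.
    return {"source": url, "title": "  Example Post ", "ts": "2019-07-12"}

def munge(raw):
    # Munge stage: accepts the common raw format, emits the canonical
    # DB record. Cleanup (trimming, renaming) happens here, in one place.
    return {
        "url": raw["source"],
        "title": raw["title"].strip(),
        "scraped_at": raw["ts"],
    }

def to_db_json(records):
    # Store-stage input: one JSON document per record (JSON Lines),
    # regardless of which scraper produced the raw data.
    return "\n".join(json.dumps(munge(r), sort_keys=True) for r in records)

if __name__ == "__main__":
    print(to_db_json([scrape_stub("https://example.com/a")]))
```

The point is the seam, not the code: a Perl scraper can write the same raw dicts as JSON to a file or pipe, and this munge stage neither knows nor cares.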
The cake is a lie.
Re^2: Web Scraping
by bliako (Abbot) on Jul 12, 2019 at 23:34 UTC