PerlMonks |
Re: Web monitoring with Perl
by haukex (Archbishop) on Jan 24, 2017 at 11:09 UTC ( [id://1180208] )
Hi predrag,

> How it is about Perl installation and version? I expect Perl 5.10.1

While that Perl version may be "good enough", it's a little over 7 years old, so you should definitely look into installing a newer version.

> Is 1 or 2GB RAM enough for this my application?

That will of course depend on the software you use, but as a guess I'd say it's probably enough for a start. Just in case, it would be good if your cloud provider offers an easy way to increase it later if necessary.

> 2. The opposite: To create a small web server with PICs on the remote place, with MySQL base, so the website server should have to connect to it and read data two times a day.

While possible, I would say it's probably better for the sensors to push their data to a central location (the web server). That way, if you later have several sensor systems, the web server doesn't have to reach out to multiple remote locations to fetch the data.

> I know about modules DBI and DBD::mysql. Any recommendations for these or something other?

For your application I'd say MySQL is a good database, and those two modules are "the" modules for accessing databases from Perl. There are nowadays also more advanced modules, such as DBIx::Class, but I've found those to be worth the effort mainly when saving complex object-oriented data structures, whereas it sounds like your data will be mostly row-based, with probably only one or two tables. In that case DBI will be fine. Note: Make sure to use placeholders!

> a cron job calling that Perl script that generates the charts twice a day and saves them as image files. ... But if I would like to do monitoring during the whole year, I have to find a solutions for that.

Generating static images and HTML pages is of course somewhat limited, but it's not all too difficult to write a CGI script* that allows the user to query and display certain ranges of data in a table dynamically.
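To illustrate the placeholder note, here is a minimal sketch of such a CGI script. The connection parameters and the `readings` table layout are made-up examples, and in a real script the date range would of course come from the (validated) form input rather than being hard-coded:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical connection values and table layout ("readings" with
# columns taken_at and value) - adjust for your own setup.
my $dbh = DBI->connect("dbi:mysql:database=sensors;host=localhost",
    "sensoruser", "secretpw", { RaiseError => 1 });

# In a real CGI script, $from and $to would come from the form input
# (after validation!) instead of being hard-coded like this.
my ($from, $to) = ("2017-01-01 00:00:00", "2017-01-24 23:59:59");

# The values are passed via placeholders ("?"), so they are never
# interpolated into the SQL string itself - this is what protects
# against SQL injection.
my $sth = $dbh->prepare("SELECT taken_at, value FROM readings "
    . "WHERE taken_at BETWEEN ? AND ? ORDER BY taken_at");
$sth->execute($from, $to);

print "Content-Type: text/html\n\n<table>\n";
while ( my ($taken_at, $value) = $sth->fetchrow_array ) {
    print "<tr><td>$taken_at</td><td>$value</td></tr>\n";
}
print "</table>\n";
$dbh->disconnect;
```

Since this sketch needs a running MySQL server and DBD::mysql to do anything, treat it as a starting point rather than a drop-in script.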
This could be done in the "traditional" way of having an HTML form, which is submitted to a CGI script that fetches the data from the database and generates the HTML to send back to the browser. That's probably easiest for a first version, but later you could implement a more modern dynamic webpage where JavaScript code (with something like jQuery) fetches the data in a format such as JSON and renders it in the browser dynamically.

> I wonder if there is anything between Perl and PICs? ... Instead I could have a computer there, accept data to the serial port (?) and write Perl scripts that read data and present in a form of graphs and tables on HTML pages.

I would nowadays strongly suggest a small single-board computer such as the Raspberry Pi instead of a microcontroller like the PIC. I'm a little biased, since I've been working with them for several months now to develop a data logging system (and I've released my software under the GPL), but there are also lots of other similar single-board systems. The advantage of such systems is that they run full OSes, and most of the Linux software that runs on your PC will also run on an RPi, including Perl. One note: since RPis use SD cards to store data, backups of the data are important, because SD cards are known to fail from time to time, especially when written to often.

You asked about the serial port, but in my experience it's easiest to use USB. There are USB-to-serial adapters which work fine with the RPi, so when selecting your sensors I'd recommend looking for ones that feature either a USB interface out of the box, or at least RS-232/RS-485 so you can use an adapter. Many of these sensors will appear on the Linux system as a virtual serial port, so reading from them is just like reading from a regular serial port; but some require special drivers, which is also something to look out for when selecting sensors.
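For reference, reading such a virtual serial port with Device::SerialPort might look roughly like this. Everything here is an assumption: the device name /dev/ttyUSB0, the 9600-baud 8N1 settings, and the idea that the sensor answers a "READ" command with one line of text - check your sensor's documentation for the real protocol:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Device::SerialPort;   # from CPAN, not in the Perl core

# Open and configure the port (9600 baud, 8 data bits, no parity,
# 1 stop bit - adjust to match your sensor).
my $port = Device::SerialPort->new("/dev/ttyUSB0")
    or die "can't open serial port: $!";
$port->baudrate(9600);
$port->databits(8);
$port->parity("none");
$port->stopbits(1);
$port->handshake("none");
$port->write_settings or die "can't apply serial port settings";

$port->write("READ\r\n");   # request a value (made-up command)
sleep 1;                    # crude: give the sensor time to reply
my ($count, $data) = $port->read(255);
die "no data from sensor\n" unless $count;
print "raw sensor reply: $data\n";
```

Since this depends entirely on the attached hardware, it can't be run as-is; it only shows the general shape of the code.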
So here's an idea for the system architecture, based partially on my experience with the data logging system I'm currently working on. This is, of course, only One Way To Do It; there are lots of alternative ways to design the system.

As mentioned, the sensors (USB or RS-232/RS-485) would be connected to the RPi, where they can be read in one of two ways. First, for very low data rates (like twice a day, or hourly), a cron job that calls a Perl script which reads one sensor value and posts it to the web server is probably enough. Second, for higher data rates, it's probably better to stay connected to the sensor, continually reading from the serial port and then processing the data as needed: writing it to a text file, a database, etc. (this is what my code does) and then sending averages or summaries to be displayed on the web server. But since you said that for now updates to the data will be infrequent, and because a cron job is easier to set up than a daemon, I'll assume the first solution.

So on the RPi, the script would have the tasks of connecting to the serial port (e.g. with Device::SerialPort), if necessary requesting a value from the sensor, reading that value, and if necessary parsing it into a usable format. Then it would need to submit that data to the web server, for example with the well-known module LWP::UserAgent, most likely via a POST request. A common data exchange format is JSON.

On the web server side, you'd have one CGI script* that accepts these data values - it should be protected with some kind of authentication mechanism to make sure only the RPis can submit data. This script would then enter the sensor values into the database. Second, there would need to be another script that reads values from the database and generates the charts, tables, etc.
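The RPi-side cron script described above could be sketched like this. The "T=21.7" sensor line format, the sensor name, the submission URL, and the X-Auth-Token header are all made-up examples (and a shared token is only the most simplistic form of authentication); the sketch uses the core module HTTP::Tiny for the POST, though LWP::UserAgent works just as well:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP qw/encode_json/;   # in the Perl core since 5.14

# Parse one line of sensor output into a number; the "T=21.7" format
# is just an example - adapt the regex to your sensor's protocol.
sub parse_reading {
    my ($line) = @_;
    my ($val) = $line =~ /^T=(-?\d+(?:\.\d+)?)\s*$/
        or die "can't parse sensor line: $line\n";
    return 0 + $val;   # force numeric so JSON encodes it as a number
}

# In the real script this line would come from the serial port.
my $reading = parse_reading("T=21.7");

my $json = encode_json({
    sensor => "greenhouse1",   # made-up sensor name
    time   => time,            # Unix timestamp; the server can convert
    value  => $reading,
});

# Submit to the (hypothetical) collection script on the web server;
# without a configured URL, just show what would be sent.
if ( my $url = $ENV{SUBMIT_URL} ) {
    require HTTP::Tiny;   # also in the Perl core since 5.14
    my $resp = HTTP::Tiny->new->post($url, {
        headers => {
            "Content-Type" => "application/json",
            "X-Auth-Token" => "changeme",   # simplistic auth - improve this!
        },
        content => $json,
    });
    die "submission failed: $resp->{status} $resp->{reason}\n"
        unless $resp->{success};
}
else {
    print "dry run, would submit: $json\n";
}
```

Running it without SUBMIT_URL set does a dry run and prints the JSON, which makes the parsing and encoding easy to test before the server side exists.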
How this script gets triggered depends on how real-time you want your data updates to be, but with data updates only twice a day, I think a cron job timed to run a little while after the RPis have submitted their data is probably enough for a first version. For example, the RPis could be configured to submit their data at 11:45 UTC, and the web pages would then be generated at 12:00 UTC - note that all systems should use NTP. A more advanced later version might have updates to the database trigger the re-generation of the web pages. But the fairly simple set-up with cron would already allow your users to view "near-real-time" data using the static images and HTML files. A next step might then be a more dynamic web interface, as already described above.

* Note that when I say "CGI script", I don't mean the CGI module specifically; for new projects it's better to use one of the more modern web frameworks I mentioned in my previous post, such as Mojolicious.

Many of my suggestions describe a fairly simple mechanism, because you said that this would be your first web-based project, and because your data updates will be relatively infrequent. In several places I mentioned future improvements that might become necessary if you have data updates faster than roughly every half hour or so, or a larger number of sensor systems submitting data - maybe more than 10 sensor systems, but that's only an order-of-magnitude estimate. But hopefully this is one good place to start.

Hope this helps,
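For a twice-daily schedule, the timing described above could look like this in the two crontabs (the paths and script names are made up, and remember that cron uses the system's local time, so keep all systems on UTC and NTP for the offsets to line up):

```
# On each RPi: read the sensor and submit the value twice a day
45 11,23 * * *   /home/pi/bin/submit_reading.pl

# On the web server: regenerate charts and HTML pages 15 minutes later
0 0,12 * * *     /home/www/bin/generate_pages.pl
```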
In Section: Seekers of Perl Wisdom