
Re: Export (extract) Mozilla Firefox Bookmarks

by jimhenry (Novice)
on Aug 18, 2020 at 21:28 UTC ( #11120886=note )

in reply to Export (extract) Mozilla Firefox Bookmarks

Thanks for the nifty tool. I used this as the basis for a script to dump or analyze Firefox history. It has worked fine so far except that one time, after doing some reports, I started Firefox and got an error about the bookmarks and history being unavailable because the database was locked. I'm not sure why the script didn't unlock the places.sqlite database on closing, but restarting Firefox fixed the problem, and since then I've run the history analysis script and started Firefox again without error messages.
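One way to sidestep the lock entirely (a sketch of my own, not part of the script below): open the database read-only via DBD::SQLite's documented sqlite_open_flags option, and disconnect explicitly instead of relying on global destruction. Safer still is working on a copy of places.sqlite so Firefox's own file is never opened. The path here is an assumption; point it at your profile.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use DBD::SQLite::Constants qw/:file_open/;   # exports SQLITE_OPEN_READONLY
use File::Copy qw(copy);

# Analyze a copy, so Firefox's own database file is never touched.
my $src  = shift // 'places.sqlite';         # hypothetical path -- pass your own
my $work = "$src.analyze";
copy( $src, $work ) or die "copy failed: $!\n";

# A read-only handle cannot leave a stale write lock behind.
my $dbh = DBI->connect( "dbi:SQLite:dbname=$work", undef, undef,
    { sqlite_open_flags => SQLITE_OPEN_READONLY, RaiseError => 1 } );

# ... run your queries here ...

$dbh->disconnect;    # release the handle explicitly
unlink $work;
```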

#! /usr/bin/perl
use strict;
use warnings;
use DBI;
use feature 'switch';
no warnings 'experimental::smartmatch';

=pod

Firefox history analyzer -- print all domains visited and the number of
times visited, or a dump of all history URLs in chronological order

based on firefox bookmarks exporter by jdporter

=cut

sub usage {
    print <<USAGE;
Usage: $0 [path-to-Firefox-profile/places.sqlite] [command]
Command is one of:
    h -- list all history by date order.
    d -- list domains and visit counts, sorted by most often visited.
Firefox must be closed for this to work, or you'll get "Database locked" errors.
USAGE
}

sub unique_domains;
sub list_visit_dates;

my $dbfile = shift;
$dbfile or usage, exit;
-r $dbfile or die "Unreadable $dbfile\n";
$dbfile =~ /\bplaces\.sqlite$/ or die "File should be places.sqlite\n";

my $dbh = DBI->connect( "dbi:SQLite:dbname=$dbfile", "", "" )
    or die "Error opening db $dbfile\n";

# Key each row by the visit id, so every visit to a page is kept.
my $history = $dbh->selectall_hashref( q(
    SELECT AS id, visit_date, url
    FROM moz_places, moz_historyvisits
    WHERE = moz_historyvisits.place_id
), 'id' );

given ( $ARGV[0] ) {
    when ( undef ) { usage }
    when ( 'h' )   { list_visit_dates }
    when ( 'd' )   { unique_domains }
    default        { usage }
}

sub unique_domains {
    my %domains;
    for my $k ( keys %$history ) {
        if ( $history->{$k}{url} =~ m! \w+:// ([^/]+) !x ) {
            $domains{$1}++;
        }
    }
    for my $d ( reverse sort { $domains{$a} <=> $domains{$b} || $a cmp $b }
                keys %domains ) {
        printf "%4d\t%s\n", $domains{$d}, $d;
    }
}

# indicates that
# visit_date is a million times the Unix epoch date (with potentially
# microsecond accuracy on some machines?)
sub list_visit_dates {
    for my $k (
        sort {
            my $c = $history->{$a}{visit_date} // 0;
            my $d = $history->{$b}{visit_date} // 0;
            $c <=> $d
                || $history->{$a}{url} cmp $history->{$b}{url}
                || $a <=> $b
        } keys %$history
    ) {
        my $t = $history->{$k}{visit_date} // 0;
        next unless $t;
        $t /= 1_000_000;
        printf "%20s %s\n", scalar localtime($t), $history->{$k}{url};
    }
}
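The microsecond detail in the comment above matters when comparing output against other tools: moz_historyvisits.visit_date is PRTime, i.e. microseconds since the Unix epoch, so it has to be divided by 1,000,000 before handing it to localtime or gmtime. A minimal check, using a made-up sample value:

```perl
use strict;
use warnings;

my $visit_date = 1_597_708_800_000_000;        # hypothetical PRTime value (microseconds)
my $epoch      = int( $visit_date / 1_000_000 );
print scalar gmtime($epoch), "\n";             # prints: Tue Aug 18 00:00:00 2020
```

I use gmtime here only so the sample output is timezone-independent; the script itself uses localtime, which shifts the same epoch value into your local zone.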
