Sayonara, Barter
http://sb.antigravities.net/
I figured I'd post this here in case anyone's interested: in the event that another meteor hits Barter.vg, I've started to keep an archive of every offer made, since nobody else has done it. The project is called "Sayonara, Barter.vg", and a crappy frontend (which I plan on improving at some point) to the collected data is here (keep in mind it downloads ~25 MB of compressed data, so mobile users beware!). I've also written a new, semi-buggy backend that you should use, here; it too is very far from complete.
If you'd like to conduct your own science experiment with the data, I've made the source to the scraper open and the data files available. Here are offers.lz (huge, ~17 MB - an archive of all of the offers), users.lz (big, ~5 MB - an archive of basic user information), and items.lz (tiny, ~200 KB - an archive of every item that's been mentioned in an offer). A full scrape takes approximately 20 hours, but your mileage may vary. I plan to re-scrape about once a week.
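If you'd rather fetch the archives from a script than by clicking the links above, a minimal Node.js sketch like the following should work, assuming the files are served from the site root (that path is my assumption; substitute whatever links the post actually gives you):

var http = require("http");
var fs = require("fs");

["offers", "users", "items"].forEach(function(v){
    // NOTE: this URL is an assumed path, not confirmed above; adjust it to the real download links.
    http.get("http://sb.antigravities.net/" + v + ".lz", function(res){
        res.pipe(fs.createWriteStream(v + ".lz"));
    });
});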
Extracting the data to JSON is relatively easy if you have access to Node.js. Simply
npm install lz-string
and then run:
var fs = require("fs");
var lz = require("lz-string");
["offers", "users", "items"].forEach(function(v){
fs.writeFileSync(v + ".json", lz.decompressFromEncodedURIComponent(fs.readFileSync(v + ".json").toString()));
});
// Ta-da, you should now have ~125M of JSON in the current directory. Isn't file compression amazing?
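As a quick sanity check, you can load the decompressed files and print how many top-level records each one contains (just a sketch - the exact record layout is whatever the scraper emitted, so this only counts entries, whether each file is an array or an object keyed by ID):

var fs = require("fs");

["offers", "users", "items"].forEach(function(v){
    var data = JSON.parse(fs.readFileSync(v + ".json").toString());
    // Handle both shapes: an array of records or an object keyed by ID.
    var count = Array.isArray(data) ? data.length : Object.keys(data).length;
    console.log(v + ": " + count + " entries");
});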
Have fun! If anyone notices anything wrong with the scraped data, please let me know so I can fix it. Also let me know if you're doing anything interesting with it; I'd love to hear about it!