A Python crawler and parser of links on web pages needs a few features added. At present it crawls links and lists them.
It needs:
1) Save the parsed data (plus the additional fields below) into a fast database that can handle many millions of records without read/write delays or glitches
2) Add support for routing requests through proxies
3) Add the following fields to each parsed record: Alexa rank of the crawled page's top-level domain, PageRank of the page, whether the link is nofollow or dofollow, the link's anchor text, SERP position of the page, Facebook, Twitter and Google+ likes of the page, time of access, and of course the page the link was found on
4) Advise whether lxml should be used as the parser, or if the current setup is adequate in terms of speed and of staying as light on the target websites as possible
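For item 1, a minimal sketch of the record schema and a batched insert. SQLite is used here only so the example is self-contained and dependency-free; for "many millions of records" a server database such as PostgreSQL (or a document store) with batched writes would be the safer choice. All table and column names are hypothetical, not from the existing crawler:

```python
import sqlite3

# Hypothetical schema covering the extra fields requested in item 3.
SCHEMA = """
CREATE TABLE IF NOT EXISTS links (
    id INTEGER PRIMARY KEY,
    url TEXT NOT NULL,
    anchor TEXT,
    nofollow INTEGER,          -- 1 = rel="nofollow", 0 = dofollow
    alexa_rank INTEGER,
    pagerank REAL,
    serp_position INTEGER,
    fb_likes INTEGER,
    tw_likes INTEGER,
    gplus_likes INTEGER,
    accessed_at TEXT,          -- ISO-8601 timestamp of the crawl
    found_on TEXT              -- page the link was parsed from
)
"""

def save_records(conn, records):
    """Batch-insert parsed records; executemany keeps write overhead low."""
    conn.executemany(
        "INSERT INTO links (url, anchor, nofollow, found_on, accessed_at) "
        "VALUES (:url, :anchor, :nofollow, :found_on, :accessed_at)",
        records,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
save_records(conn, [
    {"url": "http://example.com/a", "anchor": "Example", "nofollow": 0,
     "found_on": "http://example.com/", "accessed_at": "2013-01-01T00:00:00"},
])
print(conn.execute("SELECT COUNT(*) FROM links").fetchone()[0])
```

The same `save_records` shape (a list of dicts handed to one batched call) carries over directly to a PostgreSQL driver.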
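For item 2, the proxy support really can be close to "an added line". A stdlib-only sketch; the proxy address is a placeholder, and in the real crawler it would come from a rotating pool:

```python
import urllib.request

# Hypothetical proxy address -- replace with a real proxy (or rotate
# through a pool) in the actual crawler.
PROXY = "http://127.0.0.1:8080"

# An opener that routes both http and https traffic through the proxy.
proxy_opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

# The crawler then fetches pages with proxy_opener.open(url)
# instead of urllib.request.urlopen(url).
```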
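For the nofollow/dofollow flag and anchor text in item 3, the extraction can be sketched with the stdlib parser so the example has no dependencies; swapping in lxml.html (item 4) would yield the same data faster on large pages:

```python
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects (href, anchor text, nofollow?) triples from a page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._nofollow = False
        self._anchor = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            self._href = attrs.get("href")
            rel = (attrs.get("rel") or "").lower()
            self._nofollow = "nofollow" in rel
            self._anchor = []

    def handle_data(self, data):
        if self._href is not None:
            self._anchor.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(
                (self._href, "".join(self._anchor).strip(), self._nofollow))
            self._href = None

parser = LinkParser()
parser.feed('<a href="/x" rel="nofollow">Ad</a> <a href="/y">News</a>')
print(parser.links)  # -> [('/x', 'Ad', True), ('/y', 'News', False)]
```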
**Important:** Please name the inventor of the Python programming language in your bid for it to be considered. The 3-day deadline is only a testing period. **The project WILL BE extended** if sufficient progress is observed during the first 3 days and at each successive extension period.