I've done quite a few scrapers, including one that scraped transactions from bank accounts and uploaded them to a database (it logged into the provided online banking account and pulled the transactions). I've also built a generic account creator for SEO purposes, though that isn't technically scraping, it's web automation. For the same client I created a generic scraper that visited the provided websites and made recommendations based on each site's content and structure.
Yesterday I made a Lynda course downloader in Python, though that one was for personal use (I'm too lazy to download them manually :'D). I just provide a list of course URLs and it downloads the courses and uploads them to Google Drive, properly organized.
For screen scraping (simulating real browser behavior) I use CasperJS, a framework on top of PhantomJS. PhantomJS is a headless scriptable browser, that is, a browser without a window: it's like opening Chrome and running a script in it, without the additional overhead of a GUI (graphical user interface). I've also used Selenium for the same purpose.
When the job needs to be fast and can't afford the extra time and memory of driving a full browser, but I still want an elegant solution, I prefer Python with requests + Beautiful Soup. I've also used Scrapy for spider-like crawling.
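To give an idea of what the requests + Beautiful Soup approach looks like, here's a minimal sketch. The URL, function names, and the `<h2>` selector are just placeholders for illustration, not from any real project:

```python
# Minimal sketch of scraping with requests + Beautiful Soup.
# The URL and the <h2> selector below are hypothetical examples.
import requests
from bs4 import BeautifulSoup


def extract_titles(html):
    """Pull the text of every <h2> heading out of an HTML document."""
    soup = BeautifulSoup(html, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.select("h2")]


def scrape_titles(url):
    """Fetch a page and extract its <h2> headings."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # fail loudly on 4xx/5xx responses
    return extract_titles(resp.text)
```

Splitting the parsing into its own function keeps it testable without hitting the network, which matters once a scraper grows beyond a one-off script.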
If you think my experience fits your requirements, feel free to contact me. I'd like to know more about the project: how many websites we're talking about, and so on.