
Create a web spider as linux daemon

$250-750 USD

Awarded
Posted more than 10 years ago

Paid on delivery
A Linux daemon should spider intranet websites and extract some data. The base URLs of the intranet servers are given as ([login to view URL], [login to view URL] ... [login to view URL]).

A C++ application (daemon) should be built with the following interface, which allows managing/creating a list of pages (URLs):

- add a host to be spidered (going through all pages on the site, creating a list of the site's pages)
- add a single URL to be spidered (adding it to the list of pages of a site)
- remove a host (not to be spidered in the future, deleting all related Xapian data and lists of pages)
- remove a single URL together with all of its related Xapian data, removing it from the list of pages to be spidered
- allow setting a list of URL parameters that should be ignored (session IDs, for example)
- specify a time interval after which an already-spidered URL has to be spidered again
- specify a time interval between successive requests to a site IP, preventing "overloading" it
- specify a max_depth parameter, defining how deep the site should be crawled
- for each site host, a dedicated process should do this job, e.g. 10 site IPs to spider -> 10 processes

The interface should allow definitions such as: spider all URLs from [login to view URL], all from [login to view URL] except [login to view URL], plus spider only [login to view URL].

The processes which spider through the list of pages should:

- get the content of each URL, splitting it into text (content without HTML tags), encoding (charset), title, canonical URL and description (from meta info), and the current date+time*.
- pass this data to a different application through a function call.

The spider must not get stuck in infinite loops; it therefore has to check whether the raw site content of a URL is identical to that of a URL with some different parameter. If possible, it should use the canonical tag for this.
To determine whether a site has already been spidered, the corresponding process can "ask" (function call) whether the URL has already been spidered (based on the data extracted with *) and, if yes, whether that was more than max_interval days ago. Yes: spider again and get the data; no: continue with the next URL.

Starting points:
- [login to view URL]
- [login to view URL]
- [login to view URL]
Project ID: 4979819

Project information

5 proposals
Remote project
Active 11 years ago


About this client

Eichberg, Switzerland
5.0
3
Member since Sep 25, 2011

Client verification
