Web Scraping is the process of extracting data or information from an online source such as a website, database, or application. Web Scraping Specialists have the skills to help people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. These experts usually use web scraping tools and advanced technologies to collect large amounts of targeted data without any manual work for the client.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
Here are some projects that our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich, structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so that the customer can make better decisions more quickly without any manual labour. If you are looking for a talented professional to take on a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
From 365,432 reviews, clients rate Web Scraping Specialists 4.9 out of 5 stars.

I need a reliable, detail-oriented person to capture information from three online sources—public websites, subscription-based databases, and emails I forward—then enter it into the spreadsheet template I supply. All content must be copied exactly as shown at the source, double-checked for accuracy, and submitted on schedule. You should already be comfortable navigating web pages, extracting records from search results or dashboard views, and working quickly in Excel or Google Sheets without sacrificing precision. If you can keep data perfectly organized and error-free, let’s get started right away.
The job requirement: I need a list of all the schools in Hyderabad, if possible along with their student strength, email IDs, and contact details, in Excel. Choose a person capable of data mining, data extraction, and sorting. Data mining: List of all schools in Hyderabad (Excel) — contact details, email, and student strength. Full job description (copy-paste): I need an experienced data-mining / web-scraping freelancer to compile a comprehensive, deduplicated list of schools in Hyderabad, India. Deliverable: a clean Excel (.xlsx) file with one row per school and the fields specified below, plus a sources list. Quality, accuracy and clear documentation of sources are essential. Required fields (columns): School name; Full address (street/locality, Hyderabad, Telangana,...
AI Chatbot & Automation Developer | Customer Discovery + Zoho Integration + WhatsApp + Email Marketing About Us: We are Florence International Computer Trading LLC, a Dubai-based wholesaler and importer of laptops, desktops, and IT products. Our online platform, , serves both retail and wholesale clients worldwide — offering premium IT products, competitive pricing, and reliable service. Project Overview: We’re seeking a highly skilled AI Developer / Automation Engineer to build a smart system (Bot or Application) that can: • Automatically find and target potential customers online (without preloaded data). • Integrate with Zoho Books to access live product & stock details. • Send personalized product offers via WhatsApp and Email. • Chat automatical...
Title: Google Maps Data Scraper – Full Country Business Extraction Project ⸻ Description: I’m looking for a specialist in Google Maps data extraction who can scrape and deliver complete business datasets for two small countries. The project requires: • Extracting all active businesses within specific industry groups (details shared privately after shortlisting). • Pulling data directly from Google Maps — not from third-party databases or APIs. • Delivering clean, structured data in Excel (.xlsx) with all available public fields (name, address, contact, rating, reviews, hours, website, coordinates, social links, etc.). This job is not about leads — it’s about accuracy, completeness, and data structure. If you use browser automation tools...
I need a simple, low-cost way to pull thoroughbred racing form from the official racing websites and drop it straight into my existing Excel template whenever I choose. The process must run on an ordinary New Zealand Windows laptop—no paid cloud services, no expensive subscriptions—so a lightweight Python or Power Query approach would suit me fine as long as I can launch it with a click. Core requirements • Source: only the official racing websites I will specify. • Output: a fully populated Excel workbook that keeps my current column structure. • Data points: everything listed in the attached brief (horse numbers, dates, jockey & trainer details, race conditions, etc.). • Speed: each race download should finish in seconds. • Repeatability:...
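A minimal sketch of such a flow, assuming the official sites render the form guide as a plain HTML table; the URL and column names below are placeholders rather than the real site or brief:

```python
# Minimal sketch: pull an HTML results table into an existing Excel layout.
# Assumes the official site renders form data as a plain HTML <table>;
# the URL and column names are placeholders, not the real site.
import pandas as pd

RACE_URL = "https://example-racing-site.nz/meeting/race-1"  # placeholder
TEMPLATE_COLUMNS = ["Horse No", "Date", "Jockey", "Trainer", "Conditions"]

def fetch_race(url: str) -> pd.DataFrame:
    tables = pd.read_html(url)   # parses every <table> on the page
    form = tables[0]             # assume the first table is the form guide
    # Keep only the template's columns, in the template's order.
    return form.reindex(columns=TEMPLATE_COLUMNS)

if __name__ == "__main__":
    fetch_race(RACE_URL).to_excel("race_form.xlsx", index=False)
```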
I need a simple, self-contained bot that I will fully own and can build on later. For now, its job is straightforward: • Scan a list of audio links (either pulled from my website’s pages or a supplied JSON file) and flag any audio links that are dead or have no audio. • Do the same for embedded videos, letting me know if there is no picture, no sound, or the embed is dead. • Produce a concise report in either PDF or CSV format at the end of each run. I am flexible on tech: Python, JavaScript, or any stack you feel is quickest for a lean first version. I expect clear, well-commented source code and a short README that explains how to run it locally on my Linux Mint PC. Money is tight, so I can only work with a budget under $100 USD.
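A minimal sketch of the link checker under those constraints: it HEAD-checks each URL from a JSON file and writes a CSV report. The links.json name and flat-list layout are assumptions, and a real probe of missing picture or sound in embeds would need a media tool such as ffprobe:

```python
# Minimal sketch of the dead-link checker. "links.json" (a flat list of
# URLs) is an assumption; deeper embed checks are out of scope here.
import csv
import json

import requests

def check(url: str) -> str:
    try:
        r = requests.head(url, timeout=10, allow_redirects=True)
        return "ok" if r.status_code < 400 else f"dead ({r.status_code})"
    except requests.RequestException as exc:
        return f"dead ({exc.__class__.__name__})"

def main() -> None:
    with open("links.json") as f:
        links = json.load(f)   # assumed: a flat list of URL strings
    with open("report.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["url", "status"])
        for url in links:
            writer.writerow([url, check(url)])

if __name__ == "__main__":
    main()
```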
I have a web scraping application that is already functional; I have used it locally, and in the video () you can see how it operates alongside the instructional PDF. Now I need to take the next step: deploy it on my domain with active hosting and leave the whole integration fully documented. Main scope 1. Integrate the scraper into the site (there is no CMS: we start from a clean server). 2. Write a clear guide covering: • Installation and deployment steps on the hosting. • The user authentication flow (login) and management of temporary link-based access. • Current technical limitations and potential improvements identified. • The complete procedure (see the attached memory aid). 3. Implement the web view where ...
I need a purpose-built tool that lets my travel company push both text files and PDF files to roughly 100–200 social-media-style sites in one go. The workflow should be simple: I pick or drag-and-drop the files, paste in the list of destination URLs (or load them from a CSV), hit “Submit,” and the software handles the rest. Because our focus is only on rapid indexing with Bing, I do not need any post-submission tracking or reporting—just reliable, simultaneous uploading. Key functions I expect • Accepts text and PDF inputs in bulk • Handles 100–200 destinations per batch without throttling issues • Lets me store site credentials or API keys securely for future runs • Runs on Windows; a lightweight desktop app or a well-documented s...
I need a precise online data scraping task completed. The goal is to pull contact details—name, email, phone (where available)—directly from a list of company websites I will supply. Scope • Visit each URL, locate the public-facing contact information. • Extract and record the data in a clean Excel or CSV sheet with separate columns for company name, contact person (if listed), email, phone, and website link. • Verify that emails and phone numbers are valid and free of obvious typos. This is a straightforward assignment, so I’m looking for fast turnaround and high accuracy. If you have experience scraping company sites without triggering blockers and can deliver a tidy spreadsheet ready for upload, I’d love to work with you.
I’m looking for an experienced Python developer with strong Selenium skills to build a reliable, well-documented script that can create an X (Twitter) account using a proxy and an email address that I will provide. This project is for legitimate automation/testing purposes only — the final deliverable must respect X’s Terms of Service and must not be used for spam, evasion, or abusive activity. Key requirements (must-have) Python 3.9+ implementation using Selenium (or Selenium + undetected-chromedriver if needed). Support for HTTP(S) & SOCKS proxies (able to accept proxy credentials). Accept an email address (or list of addresses) I provide and use it for registration (assume email confirmation step; script should optionally wait for or poll IMAP/POP3/SMTP to fetch OT...
Job Title: Data Extraction from Website using Custom URL Pattern (Web Scraping Task) ________________________________________ Project Description: I need a freelancer to perform data extraction from a website following a specific URL pattern. The process involves visiting a given URL, identifying and copying specific data fields, and then repeating the same steps by modifying the URL as shown in the provided instruction video. The video clearly demonstrates the exact steps and structure to follow — no guesswork required. You’ll just need to replicate the process systematically and accurately. ________________________________________ Key Responsibilities: • Access URLs in a specific format as demonstrated in the video. • Extract defined data fields from each page (deta...
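Since the instruction video is not included here, this is only a hedged sketch of the "modify the URL and repeat" loop it appears to describe; the URL pattern, ID range, and selector are placeholders:

```python
# Hedged sketch of a URL-pattern extraction loop. Pattern, ID range, and
# CSS selector are placeholders; swap in the real ones from the video.
import csv

import requests
from bs4 import BeautifulSoup

URL_PATTERN = "https://example.com/records/{record_id}"   # placeholder

with open("extract.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["record_id", "field"])
    for record_id in range(1, 101):                       # assumed ID range
        page = requests.get(URL_PATTERN.format(record_id=record_id), timeout=15)
        soup = BeautifulSoup(page.text, "html.parser")
        field = soup.select_one(".data-field")            # placeholder selector
        writer.writerow([record_id, field.get_text(strip=True) if field else ""])
```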
I’m taking back control of a web application that was built through an external IT firm and now need a technically hands-on partner to own it day-to-day. The immediate priority is a smooth hand-over: reviewing the current codebase, getting the infrastructure under version control, and making sure production, staging, and backups all run reliably. Where I most need help is on the data side—tight, reliable database management that can also talk to Excel when required—and on bringing in just enough AI to streamline content creation and keep the product feeling fresh. The platform already leans on multiple integrations: payment gateways, social media hooks, and several third-party data feeds are live or queued for rollout, so confidence in working with RESTful APIs is ess...
We are looking for an experienced developer to help migrate and deploy our existing plugins/extensions across Shopify, Magento, and WooCommerce platforms. This includes adapting current functionality, ensuring platform compatibility, and supporting end-to-end implementation. Responsibilities: Migrate and deploy existing plugins/extensions to Shopify, Magento, and WooCommerce Customize and optimize plugins as needed for platform compatibility Ensure smooth integration with themes, checkout flow, and data models Set up staging environment, conduct testing, troubleshoot issues Document processes and provide deployment support Strong experience with Shopify, Magento, and WooCommerce development
I need a small Python bot that runs 24/7 online (not just on my laptop) and monitors news feeds for rare-earth–related policy or export events (e.g. export controls from China, U.S. DoD funding, Australian project approvals or updates). When it spots a matching headline, it should log it and show me the key details. This is not a trading bot — it’s a news-to-signal prototype for policy–time-zone analysis. Please start your bid with GREEN ONIONS or I’ll assume you didn’t read this.
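A minimal sketch of the matching loop, assuming RSS feeds as the news source; the feed URLs and keywords are placeholders, and a production version would persist seen links and run under a supervisor so it truly stays up 24/7:

```python
# Minimal sketch of the news-to-signal loop. Feeds and keywords are
# placeholders; dedup is in-memory only in this version.
import time

import feedparser  # pip install feedparser

FEEDS = ["https://example.com/mining-news.rss"]          # placeholder feeds
KEYWORDS = ["rare earth", "export control", "dod funding"]
seen: set[str] = set()

while True:
    for feed_url in FEEDS:
        for entry in feedparser.parse(feed_url).entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
            link = entry.get("link", "")
            if link and link not in seen and any(k in text for k in KEYWORDS):
                seen.add(link)
                print(f"[MATCH] {entry.get('title')} -> {link}")
    time.sleep(300)  # poll every 5 minutes
```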
I’m building a small personal project that lets me run OpenAI models against structured datasets and view the responses in a clean, browser-based interface. Here is what I need done within the next month: • A lightweight web UI (single-page is fine) that lets me paste or upload a data block, or a file, hit “Process”, and instantly see the AI-generated output. • A backend endpoint wired to my OpenAI account. Your code should take the incoming data, craft the prompt, call the model, and return the response. • A formatter that converts the raw OpenAI answer into my internal JSON schema (I’ll share the exact field list once we start). • Clear setup instructions plus commented code so I can extend or self-host the solution later. You’re ...
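A minimal sketch of the backend endpoint, assuming the official openai Python SDK (v1 client style) and FastAPI; the model choice, system prompt, and output wrapper are placeholders until the real field list is shared:

```python
# Minimal sketch of the processing endpoint. Model and prompt are
# placeholders; the formatter step is stubbed until the schema is known.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class Job(BaseModel):
    data_block: str

@app.post("/process")
def process(job: Job) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": "Summarize the dataset."},  # placeholder
            {"role": "user", "content": job.data_block},
        ],
    )
    # Formatter step: wrap the raw answer in the internal JSON schema (stub).
    return {"result": response.choices[0].message.content}
```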
I need a simple, one-off Google Maps scrape that gathers wedding photographer / wedding photography listings from all 50 U.S. states and drops everything into a clean CSV file. Columns required • Contact information – name, phone, site (and email if Maps exposes it). • Business details – business hours, current business status, total reviews. • Geographical details – full address broken into street, city, state, postal code, country, plus latitude and longitude. • Extras – main photo URL, street-view image link, and the Google Maps location link. Scope & expectations – A basic, ready-to-use CSV is all I need; no front-end or ongoing service. – Detailed photo and street-view data are important, so please capture the p...
I want a clear, up-to-date snapshot of entry-level engineering apprenticeships that a 16-year-old school-leaver can apply for right now in England. Please search mainly through Indeed and individual company career sites, filter out expired adverts or college courses, and collect a minimum of 25–30 live vacancies. Create a single Google Sheet and, for each listing, fill in: • Company name • Direct link to the apprenticeship page • Location (town/city and county) • Application deadline • Entry requirements (GCSE subjects/grades or other prerequisites) • A brief one-sentence summary of what the role involves Before you finish, re-check every link to confirm the post is still accepting applications; accuracy here is essential. As long as the ...
I have a Python script that currently processes real estate data and I need it streamlined before pushing the results into Google BigQuery. Here’s what I want done: • Remove any unused functions so the file stays lean. • Optimize the functions we actually use—speed and readability both matter. • Refactor variable and function names to a clear, consistent style. Once the code is tidy, finish by loading the processed data into my existing BigQuery dataset and pushing the cleaned code to GitHub. I’ll provide repository access, sample data, and the target table schema. A quick walkthrough of what you changed plus instructions to rerun the pipeline on my side will wrap things up.
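For the final step, a minimal sketch of the BigQuery load, assuming the cleaned script yields a pandas DataFrame and the google-cloud-bigquery client is available; the table ID is a placeholder:

```python
# Minimal sketch of the load step. Project, dataset and table names are
# placeholders; credentials come from the default environment.
import pandas as pd
from google.cloud import bigquery

def load_to_bigquery(df: pd.DataFrame) -> None:
    client = bigquery.Client()                      # uses default credentials
    table_id = "my-project.real_estate.listings"    # placeholder table
    job = client.load_table_from_dataframe(df, table_id)
    job.result()                                    # wait for the load to finish
    print(f"Loaded {job.output_rows} rows into {table_id}")
```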
During a recent personal search I discovered that Google is indexing publicly accessible photos of my car together with its VIN number and some incorrect information. I need those images and details erased from every Google surface—Search, Images, cached pages, and any other property the company controls. I want a professional who already knows the ins-and-outs of Google’s Content Removal and DMCA request procedures to step in, draft the necessary requests, submit them, follow up with Google Support, and see the process through until the content is gone. If Google requires proof of ownership or identity, you’ll walk me through supplying it securely. Deliverables • Written confirmation or case IDs from Google for each request • Screenshots showing the photos ...
I need a lightweight FastAPI-based scraper that can pull three key data points—product listings, price updates, and current inventory status—from several grocery-focused online shops. The script should: • Crawl each target site, extract the required fields, and normalize them. • Save the results directly into my existing PostgreSQL database (please include a simple schema and any needed migrations). • Expose a minimal FastAPI endpoint so I can trigger a fresh scrape or check the last run status. • Use Docker, parallel workers, and proxies. I am aiming for a basic, working prototype I can run from the command line or via a scheduled job. Keep external dependencies lean and document setup, environment variables, and a sample run so I can reproduc...
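A minimal sketch of the trigger and status endpoints, assuming FastAPI's built-in background tasks; the crawl itself and the Postgres writes are stubbed:

```python
# Minimal sketch of the trigger/status API. run_scrape is a stub standing
# in for the real crawl + Postgres insert logic.
from datetime import datetime, timezone

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()
last_run: dict = {"started": None, "finished": None, "rows": 0}

def run_scrape() -> None:
    last_run["started"] = datetime.now(timezone.utc).isoformat()
    # ... crawl target sites, normalize fields, INSERT into Postgres ...
    last_run["finished"] = datetime.now(timezone.utc).isoformat()

@app.post("/scrape")
def trigger(background_tasks: BackgroundTasks) -> dict:
    background_tasks.add_task(run_scrape)
    return {"status": "scrape scheduled"}

@app.get("/status")
def status() -> dict:
    return last_run
```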
I’m looking for a reliable data-sourcing pro to compile 50,000 fresh Office 365 user leads in the United States. Every record must contain: • Full name • Business email address (no info@ or sales@) • Company name • Current job title The ideal contacts work in biotechnology—research labs, biopharma firms, medical device startups, and similar organizations that rely on Microsoft’s cloud stack. Quality requirements • Recent and validated emails with ≤3 % bounce rate (use NeverBounce, ZeroBounce, etc.) • No duplicates, role-based inboxes, or outdated titles • Delivery as a clean CSV or Excel file, ready for import Preferred approach I’m fine with whatever mix of LinkedIn Sales Navigator, ZoomInfo, Apollo, Clearbi...
I’ve hit a dead end trying to track down one very specific item for my personal hobby. It doesn’t neatly fit into the usual categories—internally I’ve just labeled it “Searching”—and standard tools like Google search and Google Lens have come up empty. Here’s what I need from you: • Identify an online source—storefront, marketplace listing, specialist forum, distributor page, or archive—where the exact item can be purchased, ordered, or requested. • Provide a working link (or multiple links if there are options), the item’s exact name as listed, current price, seller contact details, and any regional shipping notes. • Confirm basic availability: in-stock status or realistic lead time. Keep the scope simple: ...
I need a cost-effective developer who can turn small scraping and browser-automation requests into simple, working code at around $2 an hour. Things like logging into a site, collecting fields, exporting them to CSV, or automating a short click-through sequence. Write "work" as the first word in your bid. You can use whichever stack you prefer: • Python with Requests, BeautifulSoup, Selenium, or Playwright • VB.NET with HttpClient, WebBrowser, or Selenium If you’re reliable, comfortable working at this rate, and ready to start right away, I’d love to discuss the first task and get you onboard.
I need a small, reliable script that can load the web-based snake game hosted on and take full control of the snake’s movement. The goal is simple: keep the snake alive as long as possible while maximising the score, all hands-free. Core requirements • Detect game state in real time (current direction, food location, collision risk) • Generate and send the necessary keyboard events to steer the snake intelligently • Run unattended until the round ends, then report the final score in the console or a log file Technical notes – A headless option is welcome, but the bot must also work in visible mode so behaviour can be observed while testing. – JavaScript with Puppeteer or a lightweight Python solution (Selenium + webdriver, Playwright, or direc...
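A minimal sketch of the control side in Python + Playwright (one of the stacks the brief allows); the game URL is elided in the brief, so a placeholder is used, and the steering logic is stubbed since real play needs game-state reading (canvas pixels or JS globals):

```python
# Minimal sketch: open the game page and send keyboard events. The URL is
# a placeholder (the brief's link is elided); the decision loop is a stub.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)  # visible mode for observation
    page = browser.new_page()
    page.goto("https://example.com/snake")       # placeholder URL
    page.keyboard.press("ArrowUp")               # start the snake moving
    # ... read game state each tick and press ArrowLeft/Right/Up/Down
    #     based on food location and collision risk ...
    browser.close()
```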
I have a growing set of customer submissions gathered through our online forms. Every entry contains names, email addresses, phone numbers and a few preference fields, and I need all of this information transferred accurately into a clean, well-structured Excel spreadsheet. Your task is straightforward: open each online record, verify that the fields are complete, and then enter the data into the corresponding columns in Excel so I can filter and sort the information easily later. Consistency is key—please follow the exact column order I will share (Name, Email, Phone, City, Preferences, Time-Stamp) and keep original spellings intact. The deliverable is a single Excel file with every customer record captured, free of duplicates and obvious typos. I will run a quick spot-check and...
I want to create an "open source" dictionary of Chinese food and beverages, but first need to build a dataset. The objective of the dictionary is to help foreigners studying the Chinese language gain a better appreciation of food vocabulary. Project plan: 1. Design the app scraper. Collect restaurant name, food/beverage name, category, description and picture. Other information, such as price and rating, is not important but can also be collected if available. 2. Write the scraping software. Use any programming language or framework you are comfortable with. It must output CSV or JSON and download the picture files. 3. Run a small test of the scraper and send the output to me for verification. A few rows of data will be sufficient. 4. Run the scraper in a Chinese city. The specific cit...
I need a clean, well-structured spreadsheet of real estate agents working in the following markets: • New York • Los Angeles • Houston • Dallas • Miami • Beverly Hills • Orange County, CA For each agent, please collect: 1. Full name 2. Professional email address 3. Company they work for Use only information you can confirm directly on the agent’s or brokerage’s own website; no online directories or social-media scrapes. I’m aiming for an accurate, up-to-date list rather than the largest possible one, so quality control matters. Deliver the data in Excel or Google Sheets, one row per agent, with separate columns for Name, Email, and Company. I’ll quickly check a sample before you proceed with the full bat...
I already have a live Square Online store for my hair-extension brand, but the catalogue is still empty. I need up to 500 SKUs uploaded and a basic feed pushed to Google Merchant Center so the products can appear in Shopping results. Here’s what I expect: • Organise the existing product data: place each item in the correct category, lightly edit photos where needed, and complete short, keyword-rich descriptions. • Use Square’s CSV or bulk-upload tools to add titles, prices, variants and images for all items. • Link the finished catalogue to my Google Merchant Center account and confirm the feed is approved with no critical errors. I’ll supply the raw images and spreadsheets; you refine and import them. Familiarity with Square Online, basic image edit...
Only a single social-media profile link is available. The task is to trace the real-world identity behind it and hand back verified personal details—specifically the person’s full name, location, and at least one working phone number. No interest in employment or education history, just solid contact information I can independently confirm. The profile URL will be shared privately at kickoff. A concise outline of the intended OSINT or cyber-investigation methodology (for example: Maltego, recon-ng, Spiderfoot, custom scraping or data-breach lookups) and an estimated turnaround time will help set expectations. Deliverables • Verified legal name with source references • Current location (city and country; street address if attainable) • Reachable phone num...
We are an apartment rental provider in San Diego, CA, USA, specializing in renting monthly to interns. We are looking to hire a database-building specialist who can deploy an automated web crawler bot and/or carry out manual open-source research through search engines like Google. You must have proven database-building abilities, plus the tech tools and techniques to help us build custom databases so we can target-market our services to entities within the market groups we identify and contact them for direct-marketing campaigns. * Pricing - Flat fee up to $250 with performance milestone releases. The first database we want to build is for internship programs. We need to build this database from all the companies and government agencies that offer internships within a 3 mile radius of our target...
I need a reliable automation script that focuses on web-based data extraction from popular social-media sites. The goal is to pull structured information—posts, comments, timestamps, engagement metrics, and any visible public profile details—into a clean, machine-readable format that I can later analyze. Scope and expectations • Build a working, documented script (Python is preferred, but I’m open to other languages if they suit the task better). • Handle login or scrolling logic only when the content absolutely requires it; the process should respect rate limits and site terms. • Deliver extracted data as JSON or CSV, accompanied by a concise README that explains setup, configuration, and how to extend or adjust target accounts/hashtags/pages. •...
I’m looking for a lightweight script that lets me paste a single Solana wallet address and instantly tells me whether the wallet looks safe to copy-trade or if it’s more likely being used to dump on followers or drain exit-liquidity. Core analysis logic should combine: • Transaction volume, frequency, and clustering patterns • Age of the wallet and activity bursts • Context from Solana blockchain explorer endpoints, real-time market price feeds, and relevant historical trade data A clear risk score or simple “good vs. flag” verdict is all I need on screen or in the console. Manual input of the wallet ID is fine for now; no CSV or API ingestion is required at this stage. What I expect you to deliver: • Well-commented code ...
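A minimal sketch of the lookup layer, using Solana's public JSON-RPC getSignaturesForAddress method; the burstiness threshold below is an illustrative guess, not a vetted risk model:

```python
# Minimal sketch of the wallet screen. The heuristic is illustrative only;
# a real risk score would add volume, clustering, and price-feed context.
import sys

import requests

RPC = "https://api.mainnet-beta.solana.com"

def recent_signatures(wallet: str, limit: int = 100) -> list[dict]:
    payload = {
        "jsonrpc": "2.0", "id": 1,
        "method": "getSignaturesForAddress",
        "params": [wallet, {"limit": limit}],
    }
    resp = requests.post(RPC, json=payload, timeout=15).json()
    # Keep only entries with a block time so the span arithmetic is safe.
    return [s for s in resp.get("result", []) if s.get("blockTime")]

def verdict(wallet: str) -> str:
    sigs = recent_signatures(wallet)
    if not sigs:
        return "flag: no visible activity"
    span = sigs[0]["blockTime"] - sigs[-1]["blockTime"]  # newest first
    # Illustrative heuristic: ~100 tx crammed into an hour looks bursty.
    return "flag: bursty activity" if span < 3600 else "looks ok (shallow check)"

if __name__ == "__main__":
    print(verdict(sys.argv[1]))
```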
An escort agency has created online profiles using my personal photos without my permission. I have clear evidence that the images are mine, but I do not have direct access to the accounts or dashboards where the profiles are hosted. I’m not looking to sue or take formal legal action right now; I simply want the images and any related content taken down as quickly and quietly as possible. I need someone experienced in online content removal, DMCA or similar takedown requests, and reputation-protection workflows. Scope of work • Locate every active profile or page using my photos. • Draft and submit the necessary takedown notices (DMCA, GDPR, privacy, or platform-specific) to the escort site, hosting provider, search engines, and relevant third parties. • Foll...
I’m looking for someone who knows their way around LinkedIn Sales Navigator and can pull together a clean list of 100 veterinarians who are currently working in the Netherlands. Scope • Primary filter: active LinkedIn profiles from clinics in Amsterdam, Rotterdam, Utrecht, The Hague, and also Eindhoven and Groningen. • Data fields I must have: – Name – Clinic name – LinkedIn profile URL (top priority) – Professional email (top priority) – Direct phone number (helpful but secondary) What matters most is that every record comes from an up-to-date profile, with email and the LinkedIn URL verified before you add it to the sheet. Use any enrichment tools you prefer, but the core search must start in LinkedIn Sales Navigat...
I have an Amazon automation bot written entirely in Python, but its speed and reliability have recently declined, and I need a strong, flexible update for it. Some of the data it returns is inaccurate, and the two-step verification process — including the OTP sending mechanism, which is the bot’s core and final task — doesn’t function properly or with a high success rate, causing the entire operation to fail. My goal is to restore the script to peak performance — faster execution, higher data accuracy, and a verification process that succeeds every single time. To achieve this, the code needs performance profiling, refactoring, and improvements in how it handles Amazon’s constantly changing front end. If you’re comfortable optimizing Python perfo...
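A sensible first diagnostic for this kind of slowdown is profiling. A minimal sketch with the standard library's cProfile, where run_bot stands in for the script's real entry point (a hypothetical name, not the actual module):

```python
# Minimal profiling sketch: wrap the bot's entry point with cProfile and
# print the 20 hottest calls by cumulative time. "mybot.run_bot" is a
# placeholder for the real entry function.
import cProfile
import pstats

from mybot import run_bot  # placeholder import

profiler = cProfile.Profile()
profiler.enable()
run_bot()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(20)
```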
I need a lean Python script that can post the same classified ad for to every city listed on once a day. The core goal is simple: keep the ad live everywhere, every day, without manual effort. Key functions I expect: • City-by-city posting loop that reads from a flat list or spreadsheet. • Automatic proxy rotation. I’m open to a free list, a paid service, or a custom pool—whichever you feel is the most reliable and easiest to swap out. • Image-based CAPTCHA solving so the process stays fully unattended. • Basic logging so I can track which cities succeeded, failed, or were skipped. A minimal, clean design is all that’s required right now—command-line execution and a straightforward config file for ad text, image, schedule, and proxy s...
I need a simple web scraper for Bet365's Virtual Football → WORLD CHAMPIONSHIP / EURO / PREMIERSHIP / SUPERLIGA. References: The scraper to be developed must be affordable, since this is a private, non-commercial project. I want to save, in my own database, 24 hours of Bet365 virtual football data; I need the results of the four leagues, always storing the last 24 hours live exactly as they are displayed on screen at Bet365, along with the odds of the upcoming matches. 24-HOUR BASE + ODDS OF UPCOMING MATCHES (live) (VIRTUAL FOOTBALL MARKET ONLY)
I need a clean, ready-to-use Excel spreadsheet of locksmith contacts gathered from these two public directories: • • There are roughly 2,000 listings between the two sites. Please capture every available record, then merge the data so that any entries sharing the same email address are treated as duplicates and removed. Columns required in the final file: Name, Address, Email, Phone Number—nothing more and nothing less. The finished spreadsheet should open smoothly in Excel with consistent formatting and no blank rows or stray characters. A quick scrape with Python (BeautifulSoup, Scrapy, or similar), followed by a solid dedupe pass in Pandas or your preferred toolset, is fine by me as long as the end result is accurate. Deliverable • One Excel (....
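A minimal sketch of the dedupe-and-export pass, assuming the two scrapes already yield rows as dicts; matching email addresses are treated as duplicates, as requested:

```python
# Minimal sketch: merge both scrapes, dedupe on normalized email, and
# export the four required columns to Excel (requires openpyxl).
import pandas as pd

def merge_and_export(rows_site_a: list[dict], rows_site_b: list[dict]) -> None:
    df = pd.DataFrame(rows_site_a + rows_site_b,
                      columns=["Name", "Address", "Email", "Phone Number"])
    df["Email"] = df["Email"].str.strip().str.lower()  # normalize before matching
    df = df.drop_duplicates(subset="Email", keep="first")
    df = df.dropna(how="all")                          # no blank rows
    df.to_excel("locksmiths.xlsx", index=False)
```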
I need a concise starter list of brands my agency can approach. The focus is on the “agency” space, so every company you include should either serve agencies or partner closely with them. Deliverable • A simple spreadsheet (Excel or Google Sheets) containing: – Brand name – Website URL – One publicly available contact email or LinkedIn profile for outreach Scope Keep this to an initial batch that can be built quickly—just enough to let me test messaging before scaling up. Accuracy is more important than volume. When you reply, attach one short example from past work that shows you have successfully sourced or researched prospect lists before.
Mission Build a resilient crawling and enrichment pipeline that ingests public web data at scale, enriches it with AI, stores it cleanly, and exposes it through stable APIs and a usable front end. What you’ll do • Design and ship site-specific and configurable crawlers that survive auth flows, pagination, anti-bot, and layout changes. • Choose the right tool per target: headless browser with Playwright or Puppeteer, HTTP clients, or Scrapy-style spiders. • Implement anti-blocking: rotating residential proxies, session and fingerprint management, CAPTCHA solving, backoff and retries. • Extract full HTML and clean text, normalize and deduplicate, track versions, and persist to Postgres or Mongo with S3-style raw dumps. • Build REST or GraphQL endpoints a...
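A minimal sketch of one resilience piece from that list: fetching a page with Playwright behind retries and exponential backoff. Proxy, fingerprint, and CAPTCHA handling are deliberately out of scope here:

```python
# Minimal sketch: Playwright fetch with retries and exponential backoff.
import time

from playwright.sync_api import sync_playwright

def fetch_html(url: str, retries: int = 3) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        for attempt in range(retries):
            try:
                page.goto(url, wait_until="networkidle", timeout=30_000)
                html = page.content()
                browser.close()
                return html
            except Exception:
                time.sleep(2 ** attempt)  # backoff: 1s, 2s, 4s
        browser.close()
        raise RuntimeError(f"failed to fetch {url} after {retries} attempts")
```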
I need a reliable scraper built (or an existing one configured) to pull a complete data set from a public business directory. The target site lists companies across multiple categories and paginated results. Scope • Collect every available business name with its full street address. • Capture any listed contact information—email addresses and phone numbers. • Record the directory’s stated business category or service tags for each listing. Delivery • Provide the compiled results in a clean, de-duplicated Excel spreadsheet with separate columns for each field. • Include a short “read-me” explaining any assumptions, filters, or rate-limiting tactics you used. A reusable script (Python preferred) is welcome but not mandatory as lo...
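A hedged sketch of the pagination loop; the base URL and CSS selectors are placeholders to be mapped to the real directory's markup, and the one-second sleep is a simple stand-in for proper rate limiting:

```python
# Hedged sketch of a paginated directory scrape. URL and selectors are
# placeholders; a real run would also honor robots.txt.
import time

import requests
from bs4 import BeautifulSoup

BASE = "https://example-directory.com/listings?page={page}"  # placeholder

def scrape_all() -> list[dict]:
    rows, page = [], 1
    while True:
        html = requests.get(BASE.format(page=page), timeout=15).text
        soup = BeautifulSoup(html, "html.parser")
        cards = soup.select(".listing-card")        # placeholder selector
        if not cards:
            break                                   # past the last page
        for card in cards:
            def text(sel: str) -> str:
                node = card.select_one(sel)
                return node.get_text(strip=True) if node else ""
            rows.append({
                "name": text(".name"),
                "address": text(".address"),
                "category": text(".category"),
            })
        page += 1
        time.sleep(1)                               # simple rate limiting
    return rows
```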
I need a straightforward spreadsheet that gives me broad national coverage of Australian companies in the Automotive, Electronics, and Food and Beverage sectors. For each company, I only need four columns: • Company name • Industry (Automotive, Electronics, or Food and Beverage) • Website URL • First and last name of the current Managing Director, CEO, or President The goal is 100 k – 150 k unique records drawn from manufacturing, sales, or purchasing-focused businesses. No emails, phone numbers, or social profiles are required—just the names and working websites. Feel free to gather data through public sources such as company sites, LinkedIn, ASIC, or reputable business directories, but be sure the names are spelled correctly and the links resolv...
**Project Overview:** I need a custom automation tool that can quickly access data from busy websites that have restrictions against bots and copy-pasting. **Key Requirements:** - Bypass website traffic and access restrictions - Super fast data retrieval from high-traffic sites - Avoid detection as a bot/automated tool - Handle potential CAPTCHAs and rate limiting - User-friendly interface for non-technical users **Technical Specifications:** - Must work around anti-bot measures - Should use residential proxies or similar undetectable methods - Fast response times even during peak traffic - Support for JavaScript-heavy websites - Data export capabilities (CSV, JSON, or database) **Preferred Technology Stack:** - Python (BeautifulSoup, Selenium, Playwright, Scrapy) - Residential proxy in...
I’m looking for a reliable way to pull rich product information from a single online platform. The focus is strictly on product details, and I need everything bundled so that I can reuse or extend the code later without hassle. Here’s what I expect the scraper to capture for every item it encounters: product name, full description, all available images, and the specifications shown on the page. Accuracy is critical; the data must mirror exactly what the site displays at the time of scraping. I’m comfortable with Python-based solutions that rely on requests, BeautifulSoup, Selenium, Scrapy, or an equivalent stack, provided the final script is well-commented and easy to schedule. Feel free to suggest a different language or library if it offers a clear advantage for this ...
I need a Tampermonkey userscript that visits a specified Shopify storefront, pulls the in-stock quantity for each product variant once a day, and then sends the results to a Google Sheet via an incoming webhook. The sheet should receive one new row per product/variant with date-time, product title, variant, SKU (if available), and current inventory level. Key points you’ll build: • Tampermonkey script that can be installed in Chrome. • Daily trigger—either through the script’s internal timer or by detecting first page load of the day. • Reliable parsing of Shopify JSON embedded in the storefront to capture accurate inventory levels. • POST request to a pre-generated Google Apps Script webhook URL, formatting the payload so the sheet simply appen...
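The deliverable is a Tampermonkey userscript, but the core flow (read the storefront's public /products.json, then POST one row per variant to the Apps Script webhook) can be sketched; shown here in Python for clarity, with the store URL and webhook as placeholders. Note that many storefronts expose only an "available" flag rather than an exact count in the public JSON, so exact inventory levels may need extra work:

```python
# Minimal sketch of the daily snapshot logic. Store URL and webhook are
# placeholders; field availability varies by store theme.
from datetime import datetime, timezone

import requests

STORE = "https://example-store.myshopify.com"             # placeholder
WEBHOOK = "https://script.google.com/macros/s/XXXX/exec"  # placeholder

def snapshot() -> None:
    data = requests.get(f"{STORE}/products.json?limit=250", timeout=15).json()
    for product in data["products"]:
        for variant in product["variants"]:
            requests.post(WEBHOOK, json={
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "product": product["title"],
                "variant": variant["title"],
                "sku": variant.get("sku", ""),
                # Assumption: public JSON exposes an 'available' flag, not
                # an exact count, on many stores.
                "available": variant.get("available"),
            }, timeout=15)

if __name__ == "__main__":
    snapshot()
```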
I need a fresh, accurate email database of job-seeking mid-level professionals who currently live in the United States. The list must contain: • Full name • Email address (active, permission-based) • Current job title I’m open to any industry as long as the contacts genuinely fit the mid-level career tier and are actively exploring new opportunities. Data quality is crucial—no outdated roles, generic inboxes, or duplicates. I will run random checks for validity and expect a very low bounce rate. Please outline your data source, verification method (e.g., ZeroBounce, NeverBounce), and approximate record count you can deliver. A clean CSV or Excel file is preferred. If you already have a sample of five records, feel free to share it so I can verify formatti...
I need every publicly visible email address listed on gathered and delivered as a clean, ready-to-use text file. Scope • Crawl all company and profile pages reachable from the URL above. • Extract only valid email addresses; no contact forms or placeholders. • Remove duplicates and obvious role-based spam traps (e.g., info@, webmaster@) only if they repeat—do not filter out legitimate addresses solely because they are generic. • Verify each address syntactically so the file contains no broken strings. Deliverable A single .txt file with one email per line (newline-separated, exactly as requested). Quality I will quickly spot-check 10–15 random addresses; any bounce rate above 10 % or more than five obvious errors will require a revision. Timelin...
SNS platform data, SEO, and ad analytics → auto content (writing, images, video) → multi-platform upload → automatic email sending and classification → chat & call shopping analysis upload, and the like. Once we start I will provide the detailed workflow structure (links, nodes, prompts, etc.) so you can work smoothly.