Selenium Jobs
... 2. Crawling Logic
The crawler must:
• Open with the given search parameters
• Iterate through all result pages (page 1, page 2, page 3, … until no more listings)
• Extract all listings shown on each page
• For each listing, open the detail page and extract full information
Important:
• The target site uses client-side rendering / JavaScript
• A browser-based solution (Playwright, Selenium, or equivalent) is required
• Simple requests + BeautifulSoup solutions are not sufficient
3. Data to Extract (Minimum)
For each listing, extract as many of the following as available:
• Listing ID (if available)
• Title
• Price
• Rooms
• Living space (sqm)
• Address / location
• ZIP code...
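A minimal sketch of the page-by-page loop this posting asks for. The URL pattern, the `.listing` CSS selector, and the "empty page means last page" stop condition are all assumptions; they would need to be adjusted once the real site is known.

```python
# Sketch of the paginated crawl loop; selectors and URL scheme are placeholders.

def build_search_url(base_url: str, page: int) -> str:
    """Append a hypothetical ?page=N query parameter to the search URL."""
    sep = "&" if "?" in base_url else "?"
    return f"{base_url}{sep}page={page}"

if __name__ == "__main__":
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    page = 1
    while True:
        driver.get(build_search_url("https://example.com/search", page))
        cards = driver.find_elements(By.CSS_SELECTOR, ".listing")  # assumed selector
        if not cards:
            break  # no more listings: last page reached
        for card in cards:
            detail_url = card.find_element(By.TAG_NAME, "a").get_attribute("href")
            # open detail_url and extract title, price, rooms, sqm, address here
        page += 1
    driver.quit()
```

A browser driver is needed here because the posting states the site renders client-side, so plain requests would receive an empty shell.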
...an interactive Google Sheet or a database for player data and team management. User interface and interactivity: the ability to add or correct data manually; clear reports and status messages on matches and players; robust error handling and logging for analysis purposes. Technical requirements: Python (incl. web scraping with Requests, BeautifulSoup, Selenium); working with web APIs and data formats (JSON, CSV); experience with databases or the Google Sheets API; structuring large datasets and efficient caching. Optional: a web front end or GUI for user interaction. I am looking for a developer to help me optimize the existing functionality, integrate new data sources...
...familiar with Selenium and the automation of browser interactions. The goal is to automate a web application in which several items on a page must be edited and then saved. No changes to the data are required; only the edit and save actions need to be performed for each item. Tasks: implement a script to automate web actions using Selenium; automate mouse-hover actions, clicking edit buttons, and saving forms; ensure the script runs stably and can be applied to all relevant items on the page. Optional: integrate logging and error monitoring...
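The hover, edit, and save cycle described above could be sketched with Selenium's `ActionChains`. The row markup, the `edit_selector` pattern, the `button.save` selector, and the example URL are all hypothetical placeholders for the real application.

```python
# Hover over each row, click its edit button, then save; selectors are assumed.

def edit_selector(row: int) -> str:
    """Hypothetical CSS selector for the edit button of the n-th table row."""
    return f"tr:nth-child({row}) button.edit"

if __name__ == "__main__":
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Chrome()
    driver.get("https://example.com/positions")  # placeholder URL
    rows = driver.find_elements(By.CSS_SELECTOR, "tr")
    for i in range(1, len(rows) + 1):
        target = driver.find_element(By.CSS_SELECTOR, edit_selector(i))
        # hover first so the edit button becomes visible, then click it
        ActionChains(driver).move_to_element(target).pause(0.5).click().perform()
        driver.find_element(By.CSS_SELECTOR, "button.save").click()  # assumed save button
    driver.quit()
```

The `pause(0.5)` between hover and click gives hover-triggered UI elements time to appear, which is often what makes such scripts stable.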
...reporting on defects and irregularities - identifying opportunities for continuous improvement of test automation and the testing process. Requirements: - Good written and spoken German - Bachelor's degree in computer science, software engineering, or a related field - Solid knowledge of test automation and experience with the relevant tools (e.g. Selenium, Appium, JUnit, TestNG, Jenkins, etc.) - Good programming skills (e.g. Java, Python, C#) for developing test scripts - Understanding of software testing methods, processes, and best practices - Experience with agile development methods and continuous integration - Excellent analytical and problem-solving...
Hello everyone, I have a specific web-scraping problem and unfortunately could not find a solution online. I am now looking for someone who can help me with it and am willing to pay for that help. It is probably only 1-3 lines of code; I have already written the rest of the script myself. I have a Stack Overflow thread about it: Please get in touch if you have an approach and know this area well. I would like to pay a fixed price if the problem can be solved; feel free to share your price expectation. I picture a short Teams meeting where we fix the problem together. Best regards, Dennis
IntraFind Software AG develops products in the field of insight engines & content analytics. Search & discovery and modern (text) analysis methods are what we are passionate about! To further develop our modern web applications for specific customers, we are looking for an additional front-end developer with very good JavaScript skills, preferably React. Tech stack: • JavaScript / React / Redux • for the visualization of graph databases • Jest and Sass (Bootstrap 3/4) • JSON, YAML, HTML, CSS • NPM, Babel, Webpack • Docker / Maven / Jenkins / GitLab pipelines ...
To the requirements below, please add German language skills sufficient for working in IT and willingness to travel to Germany (trips are more frequent in the initial period, less frequent later): A junior developer is needed to support the migration of test cases. In detail: the Swing test cases (Jubula) are to be migrated for the WebClient (Selenium). An automated mechanism for this already exists, but it needs to be extended to cover further business domains. On top of this come maintenance of the existing environment and pre-analysis of the test results. Required knowledge and skills: o a degree in computer science or IT-related vocational training o professional experience in the area of softwa...
For a client in the greater Stuttgart area, I am currently looking for a developer ...develop from home and are deployed in a wide variety of projects as needed. Does this profile match you? Then I look forward to receiving your current CV. The profile of the web developer (m/f) we are looking for: • Expertise in , JavaScript, Typo3 • Good knowledge of HTML/CSS • Knowledge of BW WebServices and Java desirable • Experience with the testing tools Selenium and Jasmine a plus. The project's key facts: • Start: immediate / strictly required • Location: 100% remote, max. 2-3 days on site for onboarding • Workload: part-time project • Duration: 6 months, with an extension option ...
Hello, we are looking for an experienced developer who knows Selenium-based test automation and is proficient in at least one of the programming languages mentioned above. You should also know Selenium well. You would need to be on site with us in Hamburg 2-3 times per week; from January onwards the work can be done 100% remotely. Daily rate: €500, plus €170 per day for your stay in Hamburg.
Windows, Java. Hello, I am looking for someone who can explain to me, with practical examples, what a functional test, a non-functional test, an integration test, and a regression test are, and how I can program them in Java with Selenium.
I have ongoing work related to our last project: '(Web) browser automation (simple!!!) programmed with Watir or Selenium or YOUR OWN implementation'.
I am looking for a reliable programmer who can write a SIMPLE script for the Firefox browser (with Watir or Selenium, or using YOUR OWN solution), aimed at the web-based version (!!!) of Spotify. It really only involves 2 to at most 3 steps that need to be automated by the script. For an experienced programmer familiar with WATIR or SELENIUM, this can and should be relatively quick and easy. More information at: or . It is important to me that you help me with installation and getting it running; I will GLADLY pay extra for this support! Once the script is done, the programmer would need to help get it installed and running on my notebook. ...
BrowserStack
... - Microsoft Office (Excel, VBA, Word, Access, PowerPoint) - SQL databases
Selenium automation script:
I need a straightforward Python script that signs in to a Hotmail / account with Selenium (or another reliable browser-automation library)
I need end-to-end testing for my web application. The project code is already written, so you will be focusing solely on testing. Requirements: - Thoroughly test on Chrome - Ensure all workflows function as intended - Identify and document any bugs or issues Ideal Skills: - Experience with end-to-end testing tools (e.g., Selenium, Cypress) - Strong attention to detail - Familiarity with web applications and browser testing Please provide a brief overview of your testing experience and any relevant tools you plan to use.
I have a public-facing website that I need scraped end-to-end. The site is open (no login), but the content is split across multiple pages, so your script will have to detect and follow pagination automatically. Here is exactly what I expect: • A clean, well-commented Python script (requests/BeautifulSoup, Scrapy, or Selenium—your choice) that visits every page, captures the required fields, and writes them to a neatly structured CSV. • The final CSV containing all rows pulled from the site. • A short README that tells me how to run the script and change the target URL or output path if needed. Code quality matters to me: no hard-coded absolute paths, clear variable names, and graceful error handling so the run doesn’t stop if a single page fa...
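A skeleton matching the deliverable described above: settings instead of hard-coded paths, per-page error handling that does not abort the run, and CSV output via the standard `csv` module. The field names are illustrative; the actual fields depend on the target site.

```python
# Skeleton for the paginated-scrape-to-CSV deliverable; field names are examples.
import csv
import io

FIELDNAMES = ["url", "title", "price"]  # adjust to the fields the client needs

def rows_to_csv(rows):
    """Serialize scraped rows (list of dicts) to CSV text; extra keys are ignored."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDNAMES, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def scrape_all(start_url):
    """Follow pagination; a failing page is logged and skipped, never fatal."""
    rows = []
    # For each page: fetch (requests/Scrapy/Selenium), parse fields, rows.append(...).
    # Wrap the per-page work in try/except so one bad page doesn't stop the run,
    # which is exactly the "graceful error handling" requirement above.
    return rows
```

Writing through `csv.DictWriter` rather than string concatenation also handles quoting and commas inside field values for free.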
I...framework into a CI pipeline. • Walk-throughs of flaky-test triage, mocking external dependencies, and debugging failures that only show up in complex environments. • Short take-home exercises or sample repositories that reinforce each lesson, plus code reviews so I know I’m applying the patterns correctly. Although my main interest is integration testing, I’m flexible on specific tooling: Selenium, NUnit, SpecFlow or whichever stack you feel showcases best practices in C# automation. The important part is understanding why a tool is chosen and how to extend or swap it later. Please outline your preferred tools, how you normally structure a learning path, and roughly how many hours you expect we’ll need to reach a self-sufficient framework and...
...support release cycles. • Maintain test documentation and contribute to continuous improvement of QA processes. Required Skills & Qualifications Core QA Skills • 3+ years of experience in software testing, preferably in cybersecurity or networking domains. • Strong understanding of QA methodologies: black-box, white-box, and grey-box testing. • Experience with test automation frameworks (e.g., Selenium, PyTest, Postman). • Familiarity with reverse proxy tools (NGINX, HAProxy, Envoy) and network traffic analysis. • Hands-on experience with Linux environments and shell scripting. • Graduate from a Top 50 engineering college in India as per NIRF 2025 ranking. Cloud & Security Awareness • Testing in cloud platforms (AWS, Azure, GCP) ...
...implement a robust test automation architecture with Maven/Gradle and CI/CD pipelines
• Page Object Model (POM) or similar design pattern
• reporting (Allure/Extent Reports or similar)
• reusable and maintainable test scripts
• documentation for setup and usage
Required Skills:
• experience in Playwright with Java
• understanding of Selenium/automation concepts
• TestNG/JUnit
• CI/CD integration (Jenkins/GitHub Actions, etc.)
• Git version control
• framework design and best practices
Nice to Have:
• automation experience
• knowledge
• testing exposure
Duration: Short-term (with possible extension). Share your previous Playwright (Java) project
...CSV should include original variables like organization name, state and zip even though that data was not used in the scraper. The script must perform the following steps for each URL in the input list: 1. Input: Read a list of URLs from a provided CSV file (single column of URLs). 2. Navigation/Rendering: Visit the URL (handling redirects is essential). The use of a headless browser (like Selenium/Puppeteer) or an advanced HTTP library is preferred, as some websites may load the footer content dynamically via JavaScript. 3. Targeted Scanning: Scan the HTML source code of all pages found in the sitemap, specifically looking for the presence of a specific link. 4. Output Logic: - If the link is found, record the identified vendor. - If no vendor is explicitly identified, ...
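Step 3 above, scanning a page's HTML for one specific link, needs no third-party parser at all; the standard library's `html.parser` is enough. The vendor URL in the test below is a made-up placeholder.

```python
# Scan HTML for the presence of a specific link using only the standard library.
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collects every href value found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def page_contains_link(html_text, needle):
    """True if any anchor on the page links to a URL containing `needle`."""
    finder = LinkFinder()
    finder.feed(html_text)
    return any(needle in h for h in finder.hrefs)
```

The headless-browser part (Selenium/Puppeteer) would only be responsible for rendering the page and handing the resulting HTML to this function, since the posting notes that footers may be injected via JavaScript.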
I need the entire history of a specific Facebook Group captured—every post along with all associated comments. I’m ...with working links to the images and videos placed in clearly named folders. I don't want folders or links. Just one huge continuous page that has everything. This is for a court case and I have to give this to the other side. I want them to have to scroll through however many hundred pages there are. Just as if they were actually on FB. Please outline: • your scraping approach (Python + Selenium, Go, node-puppeteer, etc.), • how you’ll handle media downloads and folder structure, • estimated turnaround time. I’ll review a short sample export before we proceed with the full run to confirm the layout meets my ...
...need a Selenium-based solution that runs reliably on Windows and opens Google Chrome to simulate human visits to LinkedIn (and occasionally other) profile URLs listed in a Google Sheet. For each URL the program should: • Pull the next unused link from the sheet • Load the page in Chrome, wait a random time between 20 seconds and 3 minutes • Apply truly randomized scrolling patterns while the profile is open so behaviour looks organic • Fire a webhook the moment the visit completes, passing back any ID or payload I define so our CRM reflects the touch instantly Configuration items such as Google Sheet ID, webhook endpoint, minimum/maximum dwell time, and daily visit caps should live in a simple file I can edit without touching code. A short README on ...
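The randomized dwell time and the irregular scrolling described above can be isolated into plain functions that the Selenium loop calls; the bounds mirror the 20-second to 3-minute spec, while the 50-pixel minimum step is an assumption.

```python
# Randomized dwell and scroll planning for the profile-visit loop above.
import random

def random_dwell(min_s=20, max_s=180):
    """Seconds to stay on the profile, uniform between min_s and max_s."""
    return random.uniform(min_s, max_s)

def scroll_plan(page_height, steps=8):
    """Irregular downward scroll offsets (pixels) summing to at most page_height."""
    offsets, used = [], 0
    for _ in range(steps):
        step = random.randint(50, max(51, page_height // steps))
        if used + step > page_height:
            break
        offsets.append(step)
        used += step
    return offsets
```

Inside the Selenium session, each offset would be applied with something like `driver.execute_script("window.scrollBy(0, arguments[0])", off)` followed by a short random sleep, so no two visits produce the same scroll trace.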
...the comment text, number of comments, likes, reposts/shares, the post date and any other readily available metadata (author handle, follower count, post URL, media links, etc.). Accuracy is critical because the data will feed a trend-analysis dashboard later. Please build the workflow in a way that respects rate limits and login requirements: if you intend to use official APIs, private APIs, Selenium, Scrapy, Playwright, or headless browsers, spell that out so I know how sustainable the solution will be. The final hand-off should include: • A clean, well-commented reusable script (Python preferred) • A short README explaining environment setup, keyword input format and how to extend to new regions • The full export in CSV so I can validate before sign-off I...
... 1. Runs on Windows 10/11 without additional setup beyond the usual runtime (e.g., Python + packaged dependencies, .NET, or a single-file executable). 2. Completes the full login-scrape-submit-click cycle unattended. 3. Automatically resolves CAPTCHAs within a reasonable timeout (configurable). 4. Produces the requested data file and a concise activity log per run. Preferred tooling: Selenium, Playwright, or Puppeteer, though I am open to any framework that meets the above goals....
I need a r...I can decide the exact days and times each role goes out • one-click “auto post” that instantly publishes a job when I hit the button The posting frequency isn’t fixed; some days I may blast several openings, other weeks none at all, so the scheduler has to respect whatever plan I set. Use whichever method makes the process most stable: headless browser automation (Puppeteer, Playwright, Selenium) or direct API calls if Placementindia offers them. The final package must run on a standard VPS, expose clear logs/errors, and allow easy editing of job templates. Please share links or short videos of similar bots you’ve built so I can gauge robustness. Once I can log in, queue a job, and watch it appear live without intervention, I’...
...Primitiva: €1.00 El Gordo: €1.50 Dynamic AI suggestions: recommend bet quantity based on backtesting, current jackpot and balance, using Monte Carlo for EV (prioritize positive EV scenarios). 2.5. Login and Bet Execution Automation Never store login/password (not even encrypted). Prompt user for credentials (email/NIF/NIE + password) every login, using secure masked fields. Login process: Selenium (headless Chrome/Firefox), access , fill form, submit, verify success (post-login elements), handle sessions/cookies. Bet execution: navigate to lottery page, generate AI numbers, add exact user-requested bets to cart (batches for high volumes), support rules of each lottery. Finalization: show full summary (lottery, bets, cost, numbers); require
I need a small, Windows-friendly Python script that will open a real browser with Selenium and wipe large batches of content from my X (Twitter), Facebook, and Instagram accounts. Because my X account sits on the free API tier I keep running into 403 errors, so this project must rely solely on browser automation—no official APIs or paid third-party tools. Here’s what I’m after: the script launches from the command prompt, asks for (or reads from a .env) my login credentials, signs in, and then iterates through all visible posts, tweets, and reels, deleting each one until none remain or until it hits an optional stop condition such as a date or a post count I can set. A simple console printout like “Deleted tweet #42” is enough for logging; I don’...
I'm trying to run the attached Jupyter notebook script to get info from a website, but I can't understand why it doesn't work. I need this script fixed, plus pagination added, to fetch around 2,400 records from YellowPages. I only use Jupyter.
...market research. The job centers on extracting selected data points from public web pages, transforming them into a clean, structured format, and making them available for analysis every 24 hours. Here’s what I need you to handle from end to end: • Source acquisition – fetch HTML from the URLs I provide, even when content is hidden behind JavaScript (a headless browser such as Playwright or Selenium is fine). • Parsing & cleansing – pull the specific fields I’ll list (product name, price, SKU, availability, and a time-stamp), remove duplicates, and standardize values. • Storage & delivery – load the daily output into my PostgreSQL instance; if you prefer Parquet or plain CSV that’s acceptable as long as it’s a...
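The parsing and cleansing step above (dedupe, standardize values, add a timestamp) can be sketched as pure helpers. The price-normalization rules here are a best-effort assumption covering both European ("1.299,00") and US ("1,299.00") formats.

```python
# Cleansing helpers for the daily pipeline above; normalization rules are assumptions.

def clean_price(raw):
    """Best effort: '€1.299,00' -> 1299.0 and '$1,299.00' -> 1299.0."""
    s = "".join(ch for ch in raw if ch.isdigit() or ch in ".,")
    if s.count(",") == 1 and s.rfind(",") > s.rfind("."):
        s = s.replace(".", "").replace(",", ".")  # European decimal comma
    else:
        s = s.replace(",", "")                    # plain thousands separators
    return float(s)

def dedupe_by_sku(rows):
    """Keep only the first occurrence of each SKU, preserving order."""
    seen, out = set(), []
    for row in rows:
        if row["sku"] not in seen:
            seen.add(row["sku"])
            out.append(row)
    return out
```

The headless-browser fetch and the PostgreSQL load would sit on either side of these functions; keeping the cleansing pure makes it trivial to unit-test before wiring it into the daily run.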
...information should be organised into a clean CSV file—one row per page—with columns for page URL, full body text, image file names, and link destinations. Please download the images themselves as well and bundle them in a separate folder (a simple ZIP is fine); the CSV should reference the exact filenames so everything lines up. I’m happy for you to use Python with BeautifulSoup, Scrapy, Selenium or whichever stack you prefer, as long as the final output meets these acceptance criteria: • Complete CSV containing text, image names, and link URLs for each page • All images successfully downloaded and accessible via the filenames listed in the CSV • No duplicates or missing pages from the target site * Images need to be sorted for each l...
I am looking for a Python developer to create a simple and focused scraper script for Facebook Mar...• A file containing all product URLs for that seller • File format: TXT or CSV • Handle infinite scrolling to load all products Technical Requirements: • Python • Selenium or Playwright • Experience with dynamic websites • Clean, runnable, and well-structured code Important Notes: • No filters required (no country, city, or keywords) • No data is needed other than product links only • Manual login can be used if required Budget: Open — to be discussed based on experience and quality When Applying, Please Include: • Any previous experience with Facebook Marketplace scraping • The tool you plan to use (...
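A sketch of the infinite-scroll collection loop this posting describes. The stop condition (page height no longer growing), the 2-second render pause, and the `a[href*='/marketplace/item/']` selector are assumptions about the Marketplace page; infinite scroll also tends to re-render items, hence the dedupe helper.

```python
# Infinite-scroll product-link collector sketch; selector and pauses are assumed.

def dedupe(urls):
    """Drop repeated URLs (re-rendered items) while keeping first-seen order."""
    seen, out = set(), []
    for u in urls:
        if u not in seen:
            seen.add(u)
            out.append(u)
    return out

if __name__ == "__main__":
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://www.facebook.com/marketplace/...")  # seller page, log in manually first
    links, last_height = [], 0
    while True:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
        time.sleep(2)  # let newly loaded products render
        links += [a.get_attribute("href") for a in
                  driver.find_elements(By.CSS_SELECTOR, "a[href*='/marketplace/item/']")]
        height = driver.execute_script("return document.body.scrollHeight")
        if height == last_height:
            break  # no new content loaded: end of the product list
        last_height = height
    with open("products.txt", "w") as f:
        f.write("\n".join(dedupe(links)))
    driver.quit()
```

Collecting links on every pass and deduplicating afterwards is simpler and safer than trying to track exactly which items are new after each scroll.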
I have a data-analysis pipeline that relies on a steady flow o... • Payload: high-resolution image files plus a CSV/JSON map linking each file to product ID, title, price, and category text that you extract during the same run. • Scale: thousands of products per crawl; a resumable approach is essential so partial failures don’t force a full restart. • Frequency: I’ll trigger the crawl weekly, so reusable code is a must. I’m happy with Python—Scrapy, Selenium, Playwright, or a headless solution of your choice—as long as it respects the site’s anti-bot measures and keeps requests polite. Please include a brief outline of how you’ll handle pagination, lazy-loaded images, and rate limiting. Let me know your proposed stac...
...mandatory declarations and any digital signature steps must all be handled by the system before it attempts the final submission. A short, clear dashboard that shows “ready”, “errors found” or “submitted” status for each tender would be ideal so I can intervene only when something is missing. Deliverables I must see to accept the project: • Source code and install guide (preferably Python with Selenium / Playwright or similar RPA layer, but I’m open if you can justify another stack) • A configuration file where I can add new government portals without touching core code • Automated validation rules that stop a submission if any mandatory field or attachment is missing • A post-submission PDF/CSV log summarisin...
...accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • first-pull values • second-pull values • calculated deltas between the two Please build in a simple scheduler or CLI flag so I can trigger each scrape automatically via cron. Bet365 is heavily scripted, so headless-browser handling (Selenium, Playwright, or Puppeteer) plus proxy/anti-bot measures may be required; use whichever stack you’re comfortable with, provided it is well documented. Deliverables 1. Clean, runnable source code with setup instructions. 2. SQL schema and migration script. 3. README showing sample queries that compare initial vs. pre-game lines. 4. Brief note on ho...
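The "calculated deltas between the two" requirement above reduces to a pure function over the first- and second-pull values keyed by market or selection; in practice this would be a SQL view or a query in the README, but the arithmetic is this simple.

```python
# Delta between the two scheduled pulls, keyed by market/selection name.

def compute_deltas(first_pull, second_pull):
    """second - first for every key present in both pulls, rounded to 4 dp."""
    return {k: round(second_pull[k] - first_pull[k], 4)
            for k in first_pull.keys() & second_pull.keys()}
```

Restricting to keys present in both pulls means markets that appeared or vanished between scrapes simply produce no delta row, instead of raising errors mid-run.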
I need a small automation script that periodically checks item availability on the Bigbasket website and pings me on Telegram the moment any of the tracked products come back in stock. You are free to choose the underlying tech stack (Python + Requests/BeautifulSoup, Selenium, Playwright, or a headless browser of your choice) as long as it works reliably with Bigbasket’s current site layout and protects my account from rate-limit blocks or captchas. The flow I have in mind is straightforward: I feed the bot a list of product URLs (or SKUs). It runs on a schedule I can change—every few minutes during peak shortages, maybe every hour otherwise—grabs the stock status, and fires a concise Telegram message whenever the status flips from “Out of Stock” to &l...
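The core of the watcher above is edge detection: alert only when the status flips from out-of-stock to in-stock, not on every poll. The exact status strings are assumptions about Bigbasket's labels, and sending the alert would be a call to the Telegram Bot API `sendMessage` endpoint around this logic.

```python
# Out-of-stock -> in-stock edge detection and alert text for the Telegram bot.

def flipped_in_stock(previous, current):
    """True only on the Out of Stock -> In Stock transition."""
    return previous == "Out of Stock" and current == "In Stock"

def format_alert(product_name, url):
    """Concise Telegram message body for a restock event."""
    return f"Back in stock: {product_name}\n{url}"
```

Storing the last-seen status per product (a small JSON file is enough) is what prevents the bot from pinging you on every scheduled run while an item stays in stock.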
...workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and url • no pages on the site were skipped • the sheet opens flawlessly in the latest des...
I want to receive an .xlsx file containing the complete 12,728 rows of the public table shown on the INDECOPI (Peru) website. The site only displays 10 recor...standard format, with no filters, pivot tables, or other added features. I will provide you with the exact URL and the navigation steps so you can locate the paginated view. Once finished, I will check that the total number of rows matches the official counter and that there are no empty cells in the three requested fields. If you have scraping experience with Python (requests, BeautifulSoup, Selenium) or similar tools and can generate the .xlsx without altering the original structure, that will be enough for me. Expected delivery: Excel file ...
...service must issue and validate JWT tokens for every request beyond the public health-check route. Token refresh, revocation, and a simple role model (“user” vs. “admin”) should be built in from the start. Flight data extraction I do not have official Iberia developer access, so we will need to pull the data ourselves. I’m open to whichever tooling you are most comfortable with — BeautifulSoup, Selenium, Scrapy, or a hybrid approach — as long as the final solution is headless, resilient to minor layout changes, and respectful of Iberia’s rate limits. Only flights that are bookable with Avios need to be captured; no hotel or car-rental data is required. Deliverables • Clean, modular Python code (FastAPI or Flask preferred,...
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce...
...as an appointment is secured (or fails), the system should push an email and, if possible, a Telegram or SMS alert to our team. Access & roles Only Admins—our internal staff—will use the interface. A straightforward dashboard that lists upcoming bookings, status messages and basic logs is enough; no public client portal is required at this stage. Tech preferences I am open to Python (Selenium, Playwright), Node, or another proven stack you recommend, as long as it can handle VFS’s anti-bot measures and is easy for us to maintain on our own server in Turkey. Deliverables 1. Source code and deployment guide for the web automation. 2. Admin dashboard with real-time booking log and resend-notification option. 3. One live demo showing the tool gr...
...visit the target site (I’ll share the URL once we start) and pull Product Details exactly as they appear online. That means every time I point the script at a category or search page it should work through all pagination, capture the data, and save it to CSV or Excel so I can sort and analyse it later. Key points to cover • Use reliable, open-source libraries such as requests, BeautifulSoup, or Selenium—whichever gives the most stable results for the site once you see it. • Build in simple settings (URL, output file name, optional delay between requests) near the top of the file so I can tweak them without touching the core logic. • Handle common edge cases: missing fields, changing layouts, or temporary time-outs, and log any skipped items for r...
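The "simple settings near the top of the file" layout requested above, plus a tolerant field getter so a missing field becomes an empty cell and a log entry rather than a crash. All names and the example URL are placeholders.

```python
# Settings-at-the-top layout and tolerant field extraction, as requested above.

# --- settings (edit these, not the logic below) --------------------------
START_URL = "https://example.com/category"  # placeholder target
OUTPUT_FILE = "products.csv"
REQUEST_DELAY_SECONDS = 1.5
# -------------------------------------------------------------------------

SKIPPED = []  # (item url, missing field) pairs, logged for later review

def safe_field(record, key, default=""):
    """Return record[key]; log the item and fall back if the field is absent."""
    if key not in record:
        SKIPPED.append((record.get("url", "?"), key))
        return default
    return str(record[key])
```

With this shape, handling "missing fields, changing layouts, or temporary time-outs" becomes a matter of wrapping each page fetch in try/except and recording skips, while the settings block stays editable without touching core logic.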
...supply in a CSV/JSON. • Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores. • Output: clean CSV or JSON dropped into a dated folder after each run. Make the script easy to tweak if Lazada changes its markup. Acceptance criteria 1. Script written in Python 3, primary parser BS4, with clear README on setup and dependencies (requests, selenium for dynamic pages if needed, etc.). 2. One-click (or single command) launch that completes without errors and produces a sample file from a test URL I provide. 3. Simple logging that flags failed pages or blocked requests so I can retry. 4. Code remains within Lazada’s permissible request rate to minimize captchas or bans. Deliver the .py file(s), , , and a sh...
...developer to build a robust scraper that collects the required data and writes it straight to JSON—no additional cleaning or processing necessary. Once we begin I’ll provide the target URL(s) and any access details; for now, assume a standard public site with pagination and occasional anti-bot checks. Core expectations • Written in Python 3 using requests/BeautifulSoup or Scrapy; resort to Selenium only if there’s no lighter workaround. • Handles pagination, retries, and polite delays gracefully so the run can complete unattended. • Config file or clear constants for headers, cookies, and start URLs, letting me tweak targets without editing core logic. • Produces a single JSON file (or one file per page if that’s cleaner) refle...
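The "retries and polite delays" expectation above can be captured in one small wrapper around whatever function does the actual request; the attempt count and base delay here are illustrative defaults.

```python
# Retry wrapper with polite, exponentially growing delays between attempts.
import time

def fetch_with_retries(fetch_fn, url, attempts=3, delay=1.0):
    """Call fetch_fn(url); on failure wait and retry, doubling the delay each time."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch_fn(url)
        except Exception as exc:  # broad on purpose for this sketch
            last_error = exc
            time.sleep(delay * (2 ** attempt))  # polite backoff before retrying
    raise last_error
```

Because `fetch_fn` is injected, the same wrapper works whether pages are fetched with requests, Scrapy callbacks, or a Selenium fallback, keeping the unattended run resilient without duplicating retry logic.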