
Closed
Published
Paid on delivery
I have 10 Databricks pipeline YAML files and 9 associated Python notebooks that all need the same set of edits. Every change is straightforward but must be applied with absolute consistency.

YAML adjustments required
• Insert a new task at the very top of each tasks section. I will supply the exact naming convention so you can drop it in without guesswork.
• Add a dependency from the existing main task to this newly inserted task.
• Modify the log-handling task so its failure condition points to the new logic, and change the hard-coded error description to a dynamic value.

Notebook adjustments required
• Before any exception is raised, add one line that captures the error message and stores it for downstream logging.

Reference files showing the pattern for each edit will be in the repository; simply mirror what you see, commit, and push. Git history must stay clean (one commit per file group is fine), and unit tests or pipeline validations should still pass after your changes.

Acceptance criteria
1. All 10 YAML files build and deploy in Databricks without warnings.
2. All 9 notebooks run end-to-end; captured error messages appear in the designated storage location.
3. New tasks and dependencies adhere exactly to the naming standards I provide.
4. No other lines are changed outside the scope above.

You'll need solid Databricks workflow experience, comfort with YAML syntax, and enough Python familiarity to tweak notebook code quickly. The work is repetitive but detail-oriented, so accuracy matters more than speed.
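For bidders unsure what the YAML side of the work looks like, here is a minimal sketch of one edited tasks section. All task keys, notebook paths, and the task-value key are placeholders I made up for illustration; the real naming convention will be supplied by the client. The `run_if` value and the `{{tasks.<task_key>.values.<key>}}` dynamic reference follow standard Databricks Jobs syntax.

```yaml
# Hypothetical sketch only; names are placeholders, not the client's convention.
tasks:
  - task_key: pre_check            # new task inserted at the top of the section
    notebook_task:
      notebook_path: /pipelines/pre_check
  - task_key: main_task
    depends_on:
      - task_key: pre_check        # new dependency from main to the inserted task
    notebook_task:
      notebook_path: /pipelines/main
  - task_key: log_handler
    depends_on:
      - task_key: main_task
    run_if: AT_LEAST_ONE_FAILED    # failure condition pointing at the new logic
    notebook_task:
      notebook_path: /pipelines/log_handler
      base_parameters:
        # dynamic error description instead of a hard-coded string,
        # read from a task value the notebook sets before raising
        error_description: "{{tasks.main_task.values.error_message}}"
```

The dynamic `error_description` assumes the notebooks publish the captured message as a task value; whatever mechanism the reference files actually use should be mirrored instead.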
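On the notebook side, the "capture before raise" edit can be sketched as below. The function name, variable name, and error text are hypothetical; the commented `dbutils.jobs.taskValues.set` call shows one way the captured message could reach downstream logging in Databricks, but the reference files in the repository define the actual pattern.

```python
# Hypothetical sketch of the notebook edit: capture the error message
# before raising so a downstream task can log it. Names are placeholders.

def process_batch(records):
    if not records:
        # New line: capture the message before the exception is raised.
        captured_error = "process_batch received an empty batch"
        # In a Databricks notebook, the message could then be handed to the
        # log-handling task as a task value, e.g.:
        # dbutils.jobs.taskValues.set(key="error_message", value=captured_error)
        raise ValueError(captured_error)
    return len(records)
```

The point of the extra line is that the message exists in a known variable (or task value) even after the exception propagates, so the log-handling task never needs a hard-coded description.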
Project ID: 40293047
6 proposals
Remote project
Active 27 days ago
6 freelancers are bidding an average of ₹1,222 INR for this job

Hi! Here is the blueprint. Apply consistent edits across 10 Databricks YAML pipelines and 9 Python notebooks. Insert the new task at the top, update dependencies, and adjust log-handling logic in the YAML files. In notebooks, capture error messages before exceptions for downstream logging. Follow the provided reference patterns precisely, commit cleanly, and ensure all pipelines deploy without warnings. Maintain unit tests and validations, with no unrelated code changes. Accuracy and consistency are key. Should I implement the notebook error-capture exactly as shown in the reference files, or is a more generalized logging approach acceptable across all notebooks? Estimated time: 5–7 days. Ready to start as soon as repository access and naming conventions are provided.
₹1,500 INR in 7 days
3.3

I see you need precise and consistent updates across 10 Databricks pipeline YAML files and 9 Python notebooks, ensuring your tasks and dependencies follow strict naming conventions with no scope creep. Your emphasis on accuracy and maintaining clean Git history really stands out. Your project requires inserting a new task atop each YAML tasks section, linking it as a dependency from the main task, and modifying the log-handling task’s failure condition and error description dynamically. For notebooks, capturing error messages before exceptions for downstream logging is key, all while keeping unit tests and pipeline validations intact. I recently completed a similar Databricks workflow update where I added new pipeline tasks and dependencies in YAML, synchronized Python notebook error handling, and maintained a clean Git commit structure. This hands-on experience with Databricks YAML and Python notebook integration will help me replicate your patterns exactly, ensuring all builds and runs succeed without warnings or errors. I can deliver these updates within 4 days, allowing careful testing to meet your acceptance criteria. Let’s discuss any specifics you want clarified before I start.
₹660 INR in 7 days
2.5

Your Databricks pipeline batch update is exactly the kind of systematic automation I handle regularly. I'll clone your repo, apply the YAML task insertions and dependency mappings, then update all nine notebooks with the error capture logic you've outlined. Clean git commits, following your reference patterns precisely. Built similar batch processing systems including a price aggregation engine that manages 800+ product feeds with consistent data transformations across multiple pipelines. Also developed automated content systems that handle repetitive updates across multiple sites while maintaining strict consistency. You can see my automation work at ffulb.com. Available to start immediately and can deliver within 2-3 days. The repetitive nature means I can move quickly once I understand your naming conventions and reference patterns.
₹922 INR in 2 days
0.0

Repetitive batch edits across YAML and Python files — this is straightforward. I'll mirror your reference files exactly, keep git history clean with one commit per file group, and verify all pipelines build without warnings. I work with Python and YAML daily and have experience with CI/CD pipelines at Amazon. All 10 YAMLs and 9 notebooks updated and tested within 48 hours. Can start as soon as you share repo access.
₹1,250 INR in 2 days
0.0

Hyderabad, India
Payment method verified
Member since Sept. 10, 2022
₹600-1500 INR