
Overview: salary statistics for the profession "Big Data Integration Developer" in Poland


Average monthly salary: 13 250 zł

Average salary level over the last 12 months: "Big Data Integration Developer in Poland"

Currency: PLN / USD. Year: 2024.
[Bar chart: changes in the average salary level for the Big Data Integration Developer profession in Poland.]

Distribution of "Big Data Integration Developer" jobs across the regions of Poland

As the chart shows, the largest number of open Big Data Integration Developer vacancies in Poland is in Dolnośląskie, with Małopolskie in second place and Mazowieckie in third.

Recommended job offers

Junior Integration Developer | Cloud&Engineering
Deloitte, Gdańsk, Łódź, Poznań, Warszawa, Wrocław
Who we are looking for:
- Knowledge of at least one of the following programming languages: Python, Java, Groovy, Node.JS, .NET Core, Visual Basic (VBS/VB.NET)
- Basic knowledge of markup/query languages and data formats: XML, YAML, JSON, WSDL
- Basic knowledge of methods for modeling solution architectures based on REST
- Knowledge of one version control system (Git preferred)
- Knowledge of SQL and the basics of data modeling
- Analytical skills
- Independence and a sense of responsibility
- Communication skills and the ability to work in a team
- Third-year bachelor's/engineering students, master's students, or graduates of computer science programs

Nice to have:
- Basic knowledge of UML notation
- Practical knowledge of the Anypoint MuleSoft integration platform (or another, e.g. TIBCO, WSO2, Boomi)
- Understanding of Agile and Waterfall methodologies
- Basic knowledge of distributed systems and cloud solutions (AWS, Azure, Google, and others)

Your future role:
- Producing project documentation
- Active participation in deployments of middleware-class systems
- Designing and implementing API-based integration solutions
- Supporting test teams in analyzing and resolving defects

What we offer:
- Under the guidance of experienced experts working for a MuleSoft Partner of the Year, you will quickly gain skills and certification opportunities that are currently in demand on the job market
- You will learn the work of an integration Architect and Developer inside out
- You will see the latest trends and challenges in the systems integration space

Recruitment path:
1. Send us your CV! Our recruiter will call you for an initial interview (about 30 minutes)
2. Technical interview (60 minutes)
3. Final interview (45 minutes)

About Deloitte: Deloitte is a diversity of people, experiences, industries, and services, delivered in 150 countries around the world. It means intellectual challenges, a good career start, opportunities for continuous development, and valuable life experience. You only have to take the first step: put the final period on the CV you send, and later on the employment contract you sign. Deloitte is simply a good choice. Period.

About the Cloud&Engineering team: We help our clients manage the transition from solutions hosted in their own data centers to solutions based on cloud computing. We advise on cloud computing, in particular on transforming organizations to the cloud model, migrating systems to the cloud, optimizing cloud usage, and developing cloud-native applications. Cloud & Engineering | Deloitte (deloitte.com)
Senior Big Data Engineer
Grid Dynamics Poland Sp. z o.o., Gdańsk, pomorskie
We are looking for an experienced Big Data Engineer specializing in Scala, Hadoop, and Spark within a large enterprise digital product company. The role involves a mix of technical proficiency, strategic thinking, collaboration, and proactive problem-solving to deliver robust and scalable data solutions.
Senior Big Data Engineer
Grid Dynamics Poland Sp. z o.o., Warszawa, mazowieckie
We are looking for an experienced Big Data Engineer specializing in Scala, Hadoop, and Spark within a large enterprise digital product company. The role involves a mix of technical proficiency, strategic thinking, collaboration, and proactive problem-solving to deliver robust and scalable data solutions.
Senior Big Data Engineer
Grid Dynamics Poland Sp. z o.o., Wrocław, dolnośląskie
We are looking for an experienced Big Data Engineer specializing in Scala, Hadoop, and Spark within a large enterprise digital product company. The role involves a mix of technical proficiency, strategic thinking, collaboration, and proactive problem-solving to deliver robust and scalable data solutions.
Senior Big Data Engineer
Grid Dynamics Poland Sp. z o.o., Kraków, małopolskie
We are looking for an experienced Big Data Engineer specializing in Scala, Hadoop, and Spark within a large enterprise digital product company. The role involves a mix of technical proficiency, strategic thinking, collaboration, and proactive problem-solving to deliver robust and scalable data solutions.
Integration Developer for AI Inventory
Hays, Kraków
For our client (banking industry) we are looking for an experienced Integration Developer (Artificial Intelligence).

Location: hybrid model from Cracow or Wroclaw
Form of cooperation: B2B contract via Hays
Rate: 160-220 PLN/h net + VAT (depending on experience, negotiable)
Team: The Applied AI and ML Team provides solutions to help the company embrace Artificial Intelligence and Machine Learning. They work with the divisions and functions of the company to deliver innovative solutions that integrate with existing platforms and provide new and enhanced capabilities.

Requirements ("must have" skills):
- Python, Pandas, FastAPI: minimum 5 years of experience
- SQL, including creating tables and stored procedures
- DevOps, including GitLab
- Experience in AI/ML projects

"Nice to have" skills:
- Java, JavaScript, TypeScript
- Microsoft Purview, DataHub, or another data catalog tool
- Data modelling using Enterprise Architect and/or SHACL
- Databricks, Dataiku, or Azure Data Factory
- MLflow
- Data science toolkits and libraries such as Kedro, Ray, Rill, DuckDB, Backstage

Responsibilities:
- Design, build, and configure the model inventory
- Integrate the inventory with other tools and data platforms in the firm; these range from standard products such as MLflow and Databricks Unity Catalog through to custom in-house solutions
- We work in an agile methodology and create appropriate design artifacts as per our in-house methodology

We offer:
- Possibility of remote work
- Work on a large-scale, international project
- Multisport card and private healthcare
Middle Big Data (with Python) Engineer
Capgemini Polska, Poznań, Gdańsk, Katowice, Wrocław, Lublin, Warszaw ...
The recruitment process for this position is conducted online.

Capgemini Engineering is a world leader in engineering and R&D services. We combine our broad industry knowledge and cutting-edge technologies in digital and software to support the convergence of the physical and digital worlds. Every day, we help our clients accelerate their journey towards Intelligent Industry. At Capgemini, we are responsible and diverse. Our organization has a strong 55-year heritage and deep industry expertise. We are trusted by our clients to address the entire breadth of their business needs, from strategy and design to operations. Our actions are fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering, and platforms.

About the project: As a Big Data Engineer, you will work on one of the world's largest social media platforms, which handles a few petabytes of data coming into the system daily. You will contribute as part of a self-organized R&D team working in a challenging, innovative environment for our client. You will investigate, create, and implement solutions for many technical challenges using cutting-edge technologies, including building and enhancing a Big Data processing platform that powers software used by hundreds of millions of users.

Your future tasks:
- Obtain tasks from the project lead or Team Lead (TL), prepare functional and design specifications, and approve them with all stakeholders
- Ensure that assigned areas are delivered within set deadlines and required quality objectives
- Provide estimations, agree task durations with the manager, and contribute to the project plan of the assigned area
- Evaluate existing data systems
- Update and optimize local and metadata models
- Design, implement, and maintain pipelines that produce business-critical data reliably and efficiently using cloud technology
- Report on area readiness/quality, and raise red flags in crisis situations that are beyond your area of responsibility

Our requirements:
- University degree in computer-related sciences or similar
- 2+ years of experience as a Big Data Engineer
- Python coding skills
- Established big-data systems experience with Hadoop, Spark/Spark SQL, Hive SQL
- Experience with large-scale graph data storage
- Strong OOP skills
- Effective communication (oral and written), collaboration, and interpersonal skills

Nice to have:
- Experience with Airflow, Apache Atlas
- Experience with AWS
- Experience with Kafka
- Experience with JavaScript and Go

What have we prepared for you?

A lot of benefits:
- Flexible working hours
- Equipment package for home office
- Private medical care and life insurance, with the option to buy additional packages (e.g., dental care, senior care, oncology) on preferential terms
- Access to the Capgemini Helpline, with the option to chat with therapists
- Bonuses for recommending your friends to Capgemini
- Access to the Inspiro app with its rich audiobook database
- Access to our NAIS benefit platform (40+ options available: Netflix, Spotify, Multisport, cinema tickets, etc.)

Personal and professional development:
- 70+ training tracks with certification opportunities (e.g., MS Azure, AWS, Google Cloud) on our NEXT training platform
- A platform with free access to Pluralsight, TED Talks, and Coursera materials and trainings
- Free access to a Udemy Business account, usable during and outside working hours
- A transparent performance management policy

Our legendary atmosphere:
- We value teamwork and good relationships; we work together, drink coffee together, and form friendships both inside and outside of work
- No official dress code
- Various communities: OUTfront, Women@Capgemini, Foreigners Community, and more
- A day off for volunteering
- The ability to implement world-changing initiatives thanks to our Grant Program
- The "Top Employer Poland 2024" and "Top Employer Europe 2024" awards: proof of our commitment to creating an exceptional work environment and caring for the development of our employees

Who are we? Being one of us means constant development among other great people. It's a team you want to spend time with, during and after work. Trainings and initiatives make your daily tasks more interesting, fun, and unique. Capgemini Engineering has 65,000 engineer and scientist team members in over 30 countries, across sectors including Automotive, AI and Data, Software & Internet, Telecommunications, Rail, Infrastructure and Transportation, Defense, Aeronautics, Energy, Communications, Semiconductor & Electronics, and Industrial & Consumer. Join us on a journey towards Intelligent Industry! It's time to #GetTheFutureYouWant! Your life is in your hands, and you have the opportunity to improve it, develop yourself, and simply join us :) Do you want to get to know us better? Visit the Capgemini Engineering website! Do you have any additional questions about working at Capgemini? Check our Instagram (@capgeminipl) or visit our Facebook profile (Capgemini Engineering). You can also find us on TikTok (@capgeminipl).
Senior Big Data Engineer (Python, Javascript, Apache Atlas) (Pinterest)
Capgemini Polska, Poznań, Gdańsk, Katowice, Wrocław, Lublin, Warszaw ...
We work in a hybrid model! The recruitment process for this position and the onboarding trainings are conducted online.

Capgemini Engineering is a world leader in engineering and R&D services. We combine our broad industry knowledge and cutting-edge technologies in digital and software to support the convergence of the physical and digital worlds. Every day, we help our clients accelerate their journey towards Intelligent Industry. At Capgemini, we are responsible and diverse. Our organization has a strong 55-year heritage and deep industry expertise. We are trusted by our clients to address the entire breadth of their business needs, from strategy and design to operations. Our actions are fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering, and platforms. Capgemini Engineering provides premium software engineering services to leading technology companies. Our customers usually range from startups to high-growth and VC-backed companies, which drives a culture of acceleration and innovation. We are sure that team extension is the engagement model that works best.

The client is a global visual-inspiration social media platform used by people around the world to shop products personalized to their taste, find ideas to pursue offline, and discover the most inspiring creators. Today, more than 450 million people come to the platform every month to explore and experience billions of ideas that have been saved. The company monetizes its social media websites by selling digital advertising.

The project is all about large-scale data processing. As a Senior Engineer, you will work on one of the world's largest social media platforms, which handles a few petabytes of data coming into the system daily. You will contribute as part of a self-organized R&D team working in a challenging, innovative environment for our client. You are going to investigate, create, and implement solutions for many technical challenges using cutting-edge technologies, including building and enhancing a Big Data processing platform that enables the effective functioning of software used by hundreds of millions of users. The team is an international one; all team members are divided into Scrum teams of 5-7 people.

Your future tasks:
- Obtain tasks from the project lead or Team Lead (TL), prepare and design specifications, and approve them with all stakeholders
- Ensure that assigned tasks are executed within set deadlines and required quality objectives
- Provide estimations and contribute to the project plan of the assigned area
- Analyze the scope of alternative solutions and make implementation decisions based on experience and technical expertise
- Lead the functional and architectural design of the assigned areas, making sure design decisions on the project meet architectural and design requirements
- Address risks; provide and implement mitigation plans
- Initiate and conduct code reviews; create code standards, conventions, and guidelines
- Suggest technical and functional improvements to add value to the product

Our expectations:
- Extensive experience as a Big Data Engineer
- Solid Python and Java coding skills
- Knowledge of JavaScript and/or Go
- Established big-data systems experience with Hadoop, Spark/Spark SQL, Hive SQL
- Experience with large-scale graph data storage
- Strong OOP skills
- Effective communication, collaboration, and interpersonal skills
- Result-oriented approach

Nice to have:
- Experience with Airflow, Apache Atlas
- Experience with AWS
- Experience with Kafka

What have we prepared for you?

A lot of benefits:
- Flexible working hours
- Equipment package for home office
- Private medical care and life insurance, with the option to buy additional packages (e.g., dental care, senior care, oncology) on preferential terms
- Access to the Capgemini Helpline, with the option to chat with therapists
- Bonuses for recommending your friends to Capgemini
- Access to the Inspiro app with its rich audiobook database
- Access to our NAIS benefit platform (40+ options available: Netflix, Spotify, Multisport, cinema tickets, etc.)

Personal and professional development:
- 70+ training tracks with certification opportunities (e.g., MS Azure, AWS, Google Cloud) on our NEXT training platform
- A platform with free access to Pluralsight, TED Talks, and Coursera materials and trainings
- Free access to a Udemy Business account, usable during and outside working hours
- A transparent performance management policy

Our legendary atmosphere:
- We value teamwork and good relationships; we work together, drink coffee together, and form friendships both inside and outside of work
- No official dress code
- Various communities: OUTfront, Women@Capgemini, Foreigners Community, and more
- A day off for volunteering
- The ability to implement world-changing initiatives thanks to our Grant Program

Who are we? Being one of us means constant development among other great people. It's a team you want to spend time with, during and after work. Trainings and initiatives make your daily tasks more interesting, fun, and unique. Capgemini Engineering has 65,000 engineer and scientist team members in over 30 countries, across sectors including Automotive, AI and Data, Software & Internet, Telecommunications, Rail, Infrastructure and Transportation, Defense, Aeronautics, Energy, Communications, Semiconductor & Electronics, and Industrial & Consumer. Join us on a journey towards Intelligent Industry! It's time to #GetTheFutureYouWant! Your life is in your hands, and you have the opportunity to improve it, develop yourself, and simply join us :) Do you want to get to know us better? Visit the Capgemini Engineering website! Do you have any additional questions about working at Capgemini? Check our Instagram (@capgeminipl) or visit our Facebook profile (Capgemini Engineering). You can also find us on TikTok (@capgeminipl).
Data Architect in the Data Management and Big Data Office
Bank Polska Kasa Opieki S.A., Warszawa, mazowieckie
Your responsibilities:
- Developing the Corporate Information Model (KMI), keeping it up to date, and owning it
- Co-creating and maintaining the corporate data map (the business meaning of data, data flows, related business processes, data quality)
- Maintaining and developing standards for storing and using data (e.g., with respect to the Data Warehouse, Data Marts, ODS, reference data)
- Maintaining and developing standards for entity modeling and data transformation, and coordinating data modeling work across the corporation
- Coordinating work on the optimal use of collected data (avoiding duplication and unnecessary transfers, recommending new data sources)
- Coordinating work on managing dictionary data
- Coordinating relationships between key stakeholders during the design, management, and implementation of business change
- Ultimately, defining and approving changes to data models and data designs across the information systems landscape (including the corporate data warehouse ecosystem, data lake/hub, etc.) and owning the EDW data models
- Close cooperation with the relevant IT areas

Our requirements:
- Knowledge and experience in implementing and applying information/data architecture principles in a large organization
- You understand the idea of corporate (conceptual), system-independent information models (EIM: Enterprise Information Model)
- You have experience in defining and managing metadata (data catalog, business glossary terms, etc.)
- You have data modeling skills and experience, and you understand information management issues
- You are able to understand the needs and interests of different user/customer groups
- You can model data in the appropriate tools (e.g., Erwin, PowerDesigner, IBM Data Architect)
- You are familiar with information management solutions (e.g., Ab Initio Metadata Hub, IBM InfoSphere, Informatica, Collibra, etc.)
- Experience in data warehousing, BI, data science, and related areas is welcome

What we offer:
- Employment under an employment contract
- A bonus depending on results and commitment
- Private medical care for you and your family on preferential terms
- A MultiSport card and group insurance on favorable terms
- A system of trainings and development programs
- Access to the internal job market
- A friendly atmosphere at work