Note: This fails when instructed to generate Turtle, due to a problem with continuation when the code block reaches its length threshold.

Prompt 1:

Generate an RDF-based description of each job posting along the following lines:

1. Use JSON-LD notation
1. Name subjects and objects using a hash-based hyperlink
1. Use https://enterprise-knowledge.com/careers/current-openings/ for @base
1. Use terms from Schema.org
1. Leave out schema:description
1. Describe each qualification
1. Write the response to a code-block

Prompt 2:

This is incomplete, so resume.

# Data Engineer

Location: Arlington, Virginia, United States
Full-time, Partially remote

Enterprise Knowledge (EK) is hiring for a full-time Data Engineer to join our growing Data and Information Management Practice. This individual will support dynamic and fast-paced technical delivery projects for a range of commercial and federal clients that provide the opportunity to learn fast and grow quickly. We are seeking a well-organized, curious, and quick learner who will tackle diverse challenges with creative solutions.

The right candidate will have a passion for working with diverse data types and applying new methods and approaches to data challenges, including managing data at scale using popular data manipulation frameworks backed by scalable cloud architecture. The data engineer will be part of a team working on cutting-edge projects, developing an orchestrated data solution to integrate and transform source data to build a Knowledge Graph that powers advanced search and Artificial Intelligence (AI) solutions.

As an EK team member, you will join a fast-growing company that is committed to diversity and inclusion, have the opportunity to work in a collaborative workplace, take advantage of our unique benefits, and help build our innovative culture. Don’t just take our word for it – we were recently featured as one of Inc. Magazine’s 2022 Best Workplaces!
## Required Skills and Qualifications:

- Experience in development using leading data manipulation frameworks, especially Python or R, and associated libraries such as pandas and dplyr.
- Able to carefully identify and understand client needs in order to design practical solutions that surpass their expectations.
- Minimum of 3 years of experience developing data pipelines or data products and services such as microservices or APIs.
- Experience implementing data transformation pipelines using data orchestration or ETL workflows, with tools such as Apache Airflow or AWS Step Functions.
- Experience with multiple data structures and tools such as relational databases, graph databases, document stores, search indexes, etc., and multiple data serialization formats such as CSV, JSON, Parquet, or HDFS.
- Proficiency in at least one database query language such as SQL, SPARQL, or Gremlin.
- Proven experience working directly with clients, providing briefings, facilitating meetings, and presenting work products.

All of our employees are required to be fully vaccinated against COVID-19 regardless of the employee's location or work arrangement (e.g., telework, remote work, etc.), subject to such exceptions as required by law. If you are hired, we will require you to prove that you have received the COVID-19 vaccine, unless you have received a medical or religious exemption.

## Preferred Skills and Qualifications:

- Experience with the graph data structure such as Knowledge Graphs or other graph analytics use cases.
- Working knowledge or experience with implementing Machine Learning workflows including Exploratory Data Analysis, Feature Engineering, Model Training, and Cross-Validation.
- Implementation of distributed data processing frameworks such as Apache Spark, Koalas, or Dask.
- Experience building, deploying, and supporting cloud-based infrastructure such as AWS, and understanding of key cloud/IT principles such as networking, permissions, basic server management, and Infrastructure as Code (IaC) templating.

# Additional Information

“We are an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.”

Response:

## JSON-LD Start ##

```json
{
  "@context": "https://schema.org/",
  "@type": "JobPosting",
  "title": "Data Engineer",
  "employmentType": ["Full-time", "Partially remote"],
  "jobLocation": {
    "@type": "Place",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "Arlington",
      "addressRegion": "Virginia",
      "addressCountry": "United States"
    }
  },
  "hiringOrganization": {
    "@type": "Organization",
    "name": "Enterprise Knowledge",
    "url": "https://enterprise-knowledge.com/careers/current-openings/"
  },
  "qualifications": [
    {
      "@type": "QualitativeValue",
      "description": "Experience in development using leading data manipulation frameworks, especially Python or R, and associated libraries such as pandas and dplyr."
    },
    {
      "@type": "QualitativeValue",
      "description": "Able to carefully identify and understand client needs in order to design practical solutions that surpass their expectations."
    },
    {
      "@type": "QualitativeValue",
      "description": "Minimum of 3 years of experience developing data pipelines or data products and services such as microservices or APIs."
    },
    {
      "@type": "QualitativeValue",
      "description": "Experience implementing data transformation pipelines using data orchestration or ETL workflows such as with tools like Apache Airflow or AWS Step Functions."
    },
    {
      "@type": "QualitativeValue",
      "description": "Experience with multiple data structures and tools such as relational databases, graph databases, document stores, search indexes, etc., and multiple data serialization formats such as CSV, JSON, Parquet, or HDFS."
    },
    {
      "@type": "QualitativeValue",
      "description": "Proficiency in at least one database query language such as SQL, SPARQL, or Gremlin."
    },
    {
      "@type": "QualitativeValue",
      "description": "Proven experience working directly with clients, providing briefings, facilitating meetings, and presenting work products."
    },
    {
      "@type": "QualitativeValue",
      "description": "Experience with the graph data structure such as Knowledge Graphs or other graph analytics use cases."
    },
    {
      "@type": "QualitativeValue",
      "description": "Working knowledge or experience with implementing Machine Learning workflows including Exploratory Data Analysis, Feature Engineering, Model Training, and Cross-Validation."
    },
    {
      "@type": "QualitativeValue",
      "description": "Implementation of distributed data processing frameworks such as Apache Spark, Koalas, or Dask."
    },
    {
      "@type": "QualitativeValue",
      "description": "Experience building, deploying and supporting cloud-based infrastructure such as AWS and understanding of key cloud/IT principles such as networking, permissions, basic server management, and Infrastructure as Code (IaC) templating."
    }
  ],
  "description": "Enterprise Knowledge (EK) is hiring for a full-time Data Engineer to join our growing Data and Information Management Practice. This individual will support dynamic and fast-paced technical delivery projects for a range of commercial and federal clients that provide the opportunity to learn fast and grow quickly. We are seeking a well-organized, curious, and quick learner who will tackle diverse challenges with creative solutions. The right candidate will have a passion for working with diverse data types and applying new methods and approaches to data challenges including managing data at scale using popular data manipulation frameworks backed by..",
  "skills": [
    {
      "@type": "DefinedTerm",
      "name": "Python",
      "value": "development using leading data manipulation frameworks"
    },
    {
      "@type": "DefinedTerm",
      "name": "R",
      "value": "development using leading data manipulation frameworks"
    },
    {
      "@type": "DefinedTerm",
      "name": "pandas",
      "value": "development using leading data manipulation frameworks"
    },
    {
      "@type": "DefinedTerm",
      "name": "dplyr",
      "value": "development using leading data manipulation frameworks"
    },
    {
      "@type": "DefinedTerm",
      "name": "data pipeline",
      "value": "minimum of 3 years of experience developing data pipelines or data products and services such as microservices or APIs"
    },
    {
      "@type": "DefinedTerm",
      "name": "ETL",
      "value": "experience implementing data transformation pipelines using data orchestration or ETL workflows such as with tools like Apache Airflow or AWS Step Functions"
    },
    {
      "@type": "DefinedTerm",
      "name": "database",
      "value": "proficiency in at least one database query language such as SQL, SPARQL, or Gremlin"
    },
    {
      "@type": "DefinedTerm",
      "name": "graph data structure",
      "value": "experience with the graph data structure such as Knowledge Graphs or other graph analytics use cases"
    },
    {
      "@type": "DefinedTerm",
      "name": "Machine Learning",
      "value": "working knowledge or experience with implementing Machine Learning workflows including Exploratory Data Analysis, Feature Engineering, Model Training, and Cross-Validation"
    },
    {
      "@type": "DefinedTerm",
      "name": "distributed data processing",
      "value": "implementation of distributed data processing frameworks such as Apache Spark, Koalas, or Dask"
    },
    {
      "@type": "DefinedTerm",
      "name": "cloud-based infrastructure",
      "value": "experience building, deploying and supporting cloud-based infrastructure such as AWS and understanding of key cloud/IT principles such as networking, permissions, basic server management, and Infrastructure as Code (IaC) templating"
    }
  ],
  "vaccinationRequirement": "All of our employees are required to be fully vaccinated against COVID-19 regardless of the employee's location or work arrangement (e.g., telework, remote work, etc.), subject to such exceptions as required by law. If you are hired, we will require you to prove that you have received the COVID-19 vaccine, unless you have received a medical or religious exemption.",
  "hiringOrganization": {
    "@type": "Organization",
    "name": "Enterprise Knowledge",
    "url": "https://enterprise-knowledge.com/careers/current-openings/",
    "logo": "https://enterprise-knowledge.com/wp-content/uploads/2016/03/EK_logo_s_550x220.png",
    "sameAs": [
      "https://www.linkedin.com/company/enterprise-knowledge/",
      "https://twitter.com/EntKnowledge",
      "https://www.facebook.com/EnterpriseKnowledge/"
    ]
  }
}
```

## JSON-LD End ##
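One flaw worth catching before publishing output like the response above: the top-level object declares `hiringOrganization` twice, and `json.loads` silently keeps only the last value, so the first (shorter) organization block is lost. A minimal stdlib sketch for rejecting duplicate keys during parsing; the helper name and the shortened snippet are illustrative, not part of the original session:

```python
import json


def find_duplicate_keys(pairs):
    """object_pairs_hook that raises on duplicated keys instead of
    silently keeping the last value, as json.loads does by default."""
    seen, duplicates = {}, []
    for key, value in pairs:
        if key in seen:
            duplicates.append(key)
        seen[key] = value
    if duplicates:
        raise ValueError(f"duplicate keys: {duplicates}")
    return seen


# Hypothetical minimal excerpt mirroring the shape of the response above.
snippet = '{"@type": "JobPosting", "hiringOrganization": {}, "hiringOrganization": {}}'

try:
    json.loads(snippet, object_pairs_hook=find_duplicate_keys)
except ValueError as err:
    print(err)  # prints: duplicate keys: ['hiringOrganization']
```

The hook runs bottom-up on every object in the document, so nested duplicates are caught as well, not just top-level ones.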