How to get employee details of a company

Scrapingdog
3 min read · Jan 11, 2024

Requirements

I hope you have already installed Python 3.x on your machine; if not, you can install it from here. Once that is done, create a folder in which we will keep our Python scripts.

mkdir employee
cd employee

Once you are inside your employee folder, install these public Python libraries.

  • Requests – This will be used for making the HTTP connection with the Enrichment API.
  • Pandas – This library will help us create a CSV file from the JSON data we receive from the API.
pip install requests
pip install pandas

One more step is left: sign up for a free account of the Enrichment API. The free pack comes with a generous 50 API calls, which is enough for testing the API.

Getting Employees of Google

For this example, we are going to retrieve employees of Google using the Enrichment API and then later we are going to save the data to a CSV file using pandas.

Once you sign up for the free account, you will be redirected to the dashboard, where you will find your API key.

You have to use this API key in the code we are going to write. Before you start coding, you should go through the documentation of the API. We will be using the /employees endpoint to get the data.

Our goal for this blog is to obtain a list of employees who are Software Engineers at Google and are based in Boston.

import requests
import pandas as pd

# Enrichment API endpoint for company employees
url = 'https://api.enrichmentapi.io/employees'

# Query parameters: your API key plus the filters for this search
params = {
    'api_key': 'Your-API-Key',
    'domain': 'google.com',
    'city': 'boston',
    'position': 'Software Engineer',
    'size': 100
}

# Make the GET request to the API
resp = requests.get(url, params=params)

# Parse the JSON body of the response
l = resp.json()

# Build a DataFrame from the list of employees and save it as a CSV file
df = pd.DataFrame(l['employees_data'])
df.to_csv('emp.csv', index=False, encoding='utf-8')

In the above code, we passed the domain as google.com, the city as Boston, the position as Software Engineer, and the size (the number of employees to return) as 100.

  1. l = resp.json() extracts the JSON data from the resp object. The json() method parses the response content as JSON and stores it in the variable l. The JSON data is expected to contain information about employees.
  2. df = pd.DataFrame(l['employees_data']) creates a Pandas DataFrame (df) from the 'employees_data' key in the JSON data stored in the variable l. The assumption here is that 'employees_data' contains a list of dictionaries, and each dictionary represents the data for one employee.
  3. df.to_csv('emp.csv', index=False, encoding='utf-8') takes the DataFrame df and saves its contents to a CSV file named emp.csv. The index=False argument ensures that the DataFrame's index is not included as a separate column in the CSV file, and encoding='utf-8' specifies the character encoding to use when writing the file.
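The code above assumes the request succeeds and that the response always contains the employees_data key. As a small sketch using the same endpoint and parameters, you can add a couple of basic checks so that a failed request or an empty response does not produce a broken CSV:

import requests
import pandas as pd

url = 'https://api.enrichmentapi.io/employees'
params = {
    'api_key': 'Your-API-Key',
    'domain': 'google.com',
    'city': 'boston',
    'position': 'Software Engineer',
    'size': 100
}

resp = requests.get(url, params=params)

# Stop early if the API did not return a successful response.
if resp.status_code != 200:
    raise SystemExit(f'Request failed with status {resp.status_code}')

data = resp.json()

# Guard against a response that is missing the expected key.
employees = data.get('employees_data', [])
if not employees:
    raise SystemExit('No employee data returned; check your API key and parameters.')

df = pd.DataFrame(employees)
df.to_csv('emp.csv', index=False, encoding='utf-8')
print(f'Saved {len(df)} rows to emp.csv')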

Once you run the script, you will find a file named emp.csv inside your employee folder.
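To quickly verify the output, you can load the file back with pandas and inspect the first few rows. This is just a sanity check; the exact column names depend on what the API returns.

import pandas as pd

# Load the CSV produced by the script above and take a quick look at it.
df = pd.read_csv('emp.csv')
print(df.shape)              # (number of rows, number of columns)
print(df.columns.tolist())   # column names returned by the API
print(df.head())             # first five employees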

You now have a CSV file that contains data such as first name, last name, job title, LinkedIn profile URLs, and an email API URL through which you can retrieve their company emails as well.
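If you want to collect those emails, you can loop over that column and call each URL. The sketch below assumes the column is named email_api_url and that the endpoint expects your api_key as a query parameter; both are assumptions, so check the actual column name in your CSV and the API documentation before using it.

import requests
import pandas as pd

API_KEY = 'Your-API-Key'

df = pd.read_csv('emp.csv')

emails = []
# 'email_api_url' is a hypothetical column name; replace it with the
# actual column present in your emp.csv.
for api_url in df['email_api_url']:
    # Passing the API key as a query parameter is an assumption; consult the docs.
    resp = requests.get(api_url, params={'api_key': API_KEY})
    emails.append(resp.json() if resp.status_code == 200 else None)

df['email_result'] = emails
df.to_csv('emp_with_emails.csv', index=False, encoding='utf-8')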

Now, if you need more information about any individual in the above list, you can use the Person API to get full details such as experience, education, and awards.
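As a rough illustration, a Person API request might take your API key and a LinkedIn profile URL. The endpoint path and parameter name below (/person and link) are assumptions here, so refer to the Person API documentation for the exact request format.

import requests

# The endpoint path and the 'link' parameter are assumptions; check the Person API docs.
url = 'https://api.enrichmentapi.io/person'
params = {
    'api_key': 'Your-API-Key',
    'link': 'https://www.linkedin.com/in/some-profile'  # hypothetical profile URL
}

resp = requests.get(url, params=params)
person = resp.json()
print(person)  # full profile details such as experience, education, and awards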

