

API Automation Made Easy: Harness The Power Of Newman & Compare Extracts Against Baseline Using Python

Author: Aswath Madhubabu

Tired of spending countless hours testing APIs and comparing results with benchmarks?

In this article, we will introduce Newman, a powerful Postman command-line tool that will change our approach to API testing. But it doesn’t end there! We’ll also cover how we can use Python and its libraries to save time by easily comparing our extracts against baseline data.

Get ready to unleash the potential of automation and elevate our API testing game like never before!

Introduction to API Automation
  • API automation refers to the automated testing of APIs. The main objective of API automation is to improve the efficiency of the testing process and ensure higher-quality results. API automation enables the testing of both in-house developed APIs and those provided by third parties. For testing in-house APIs, open-source tools like Newman can be used. Newman is a command-line tool specifically designed for running API tests.
  • In this blog post, our main focus will be on using Newman to automate the testing of RESTful web services. We’ll show how to compare an extract against the established baseline and generate a comprehensive report based on the results obtained.
  • We can use this technique to swiftly detect any changes in an API response that might disrupt our application.
“Automate our APIs, and let the machines do the heavy lifting.”
Problem Statement

REST APIs that execute a process and write the output to a database have to be tested to verify that, given the right set of parameters, they perform the update as expected.

Given a baseline, we need to verify whether our API produces the expected output after execution.

The API has two steps:

  • Execute the job.
  • Periodically call a status API to verify that the job for the submitted ID has completed.
Manual Steps:
  • Create multiple replicas of the postman collection for each test case that was used to test the APIs.
  • Execute each of them one by one and verify the status by calling the Get Status API often. 
  • Regenerate the token in case one of the tokens expires (If API is authenticated for every request). 
  • Once executed with failure status, call status API to verify which step failed. 
  • Once executed with a success status, run queries individually against the output tables. Download the output content as CSV.
  • Convert the outputs to the custom format used to maintain baseline test cases so that they can be compared with the baseline.
  • Compare the baseline with the extracted data using any diff checker.
Automated Steps

Execute the utility with test cases and visualize the report for status.

Sit back & relax!
Overall Architecture

The purpose is to be able to run the tasks given by the user and then retrieve the results from the target database. We will then analyze these outputs in comparison with a baseline and create a report highlighting any differences that may exist.

What does generating Postman collections mean?
  • Postman uses a JSON format to store collections, including environment variables and their corresponding values. Once imported into Postman, these collections can be executed, and the console allows for running the API and viewing its status.
  • The concept behind the utility is to create a Postman collection in a structured, sequential manner, allowing it to be executed using Newman.
  • Consider a classic example of an API that requires a token to run and submits a job upon execution. In addition, there is a status API available to retrieve the current progress periodically.

Step 1 : Authentication using token.

In the above example, we retrieve the token and set it as a collection variable.

Subsequent requests will use this token, since at the collection level we reference the variable BEARER_TOKEN.

Step 2 : Execute the job and chain the fetch status job.

  • With the above example, we’re executing the job.
  • First, the test script checks the HTTP response code (pm.response.code), expecting it to be 202 or 200.
  • If the code is 202 or 200, we do the following:
    – Log the jobId from the JSON response to the console.
    – Create a collection variable named ‘job_id’ with the value data.jobId.
    – Set the next request in the Postman collection to “GetJobStatus.”
  • If the code isn’t 202 or 200, we run an assertion with pm.response.to.have.status(201) or pm.response.to.have.status(200).
  • If that assertion fails (the response code isn’t 201 or 200), the script sets the next request to null. This stops any further requests and throws an error with the message of the caught exception.
We’ve written our first API test case and chained the requests.

Step 3 : Periodically check the status until success or failure.

In the above example, we execute the status check periodically. When we export this entire collection to JSON format, it can be executed by Newman.
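The periodic status check can be sketched in Python as a simple polling loop. This is a minimal illustration rather than the utility itself; the poll_until_complete helper, the get_status callable, and the terminal status names are assumptions made for the example:

```python
import time

def poll_until_complete(get_status, job_id, interval_secs=5, max_attempts=60):
    """Poll `get_status(job_id)` until it reports a terminal state.

    `get_status` is any callable returning the job's status string; the
    terminal status names ("SUCCESS", "FAILED") are illustrative assumptions.
    """
    for _ in range(max_attempts):
        status = get_status(job_id)
        if status in ("SUCCESS", "FAILED"):
            return status
        time.sleep(interval_secs)
    raise TimeoutError(f"Job {job_id} did not finish within the allotted attempts")
```

In the Postman collection, the same loop is expressed by chaining the GetJobStatus request to itself until a terminal status is returned.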

What is Newman and What Does it Do?
  • Newman is a powerful tool for automated API testing from the command line. It can test all sorts of APIs and even websites, especially REST APIs and SOAP web services. HTTP methods like GET, POST, PUT, DELETE, and PATCH are all available with Newman. Additionally, it has ways to validate field data.
  • Running API tests with Newman can be done through the console or in a CI/CD pipeline. We can also pair it with test frameworks such as Mocha, Jasmine, and AVA for more advanced use cases.
  • To start using Newman we have to install Node.js on our computer. When we’re finished with that, install Newman by putting in this command:
  • npm install -g newman
  • There’s a lot more we can do with Newman than we think! To learn how, see https://learning.postman.com/docs/collections/using-newman-cli/command-line-integration-with-newman/
Setting Up Newman
  • Newman is a powerful command-line collection runner for Postman. It allows us to run and test a Postman Collection directly from the command line.
  • Once we have Newman installed, we can use it to run Collections from the command line. To do this, we will need to use the Newman run command followed by the path to our Collection JSON file or folder. For example:
newman run collection.json
  • This will execute all of the requests in our Collection. We can also specify which environment to use with the -e flag followed by the path to our environment file:
newman run collection.json -e environment.json
Parallel execution with Newman

Newman can also be run from the command line on the machine where it’s installed. Let’s see how we can run multiple API tests in one shot while still executing each of them and reporting the status.

import logging
import subprocess
import sys

def run_job(collection_paths, extract_path):
    logging.info("Starting to run the calc plan API")
    commands = []
    for collection in collection_paths:
        single_command = (
            f"newman run {collection}"
            f" --reporters=cli,htmlextra --reporter-htmlextra-export "
            f"{extract_path}newman-report/"
        )
        commands.append(single_command)

    logging.info("Executing commands " + str(commands))

    # Launch all Newman runs in parallel; output streams to the console.
    processes = [
        subprocess.Popen(cmd, stdout=sys.stdout, stderr=subprocess.STDOUT, shell=True)
        for cmd in commands
    ]
    for process in processes:
        process.communicate()
        if process.returncode != 0:
            logging.error("Error while running the command")
            sys.exit(1)
    logging.info("API execution is successful. Starting extract generation")

This code runs a set of Postman collections using Newman and generates HTML reports for each collection run. Here’s a concise explanation:

  • It constructs a list of commands to run Postman collections using Newman, where each command specifies the collection file path, reporters (CLI and HTMLExtra), and the destination path for HTML reports.
  • It spawns a subprocess for each command, running the Postman collections in parallel; their output streams to the console.
  • If every subprocess exits successfully, it logs a success message and proceeds with the extract generation task. If any subprocess fails, it logs an error message and exits the script.
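To make the command format explicit, the command-building step can be isolated as below. This is a sketch for illustration; the build_commands helper and the example paths are not part of the original utility:

```python
def build_commands(collection_paths, extract_path):
    # One `newman run` invocation per collection, with the cli and
    # htmlextra reporters writing HTML reports under the extract path.
    return [
        f"newman run {collection}"
        f" --reporters=cli,htmlextra --reporter-htmlextra-export "
        f"{extract_path}newman-report/"
        for collection in collection_paths
    ]

print(build_commands(["login.postman_collection.json"], "/tmp/extracts/"))
```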
Benefits of Using Newman

Newman is a very important tool that can be used to automate our API testing. As an addition, Newman can also make it easier to generate reports on the results of said tests.

  • Automate API Testing: We can automate the process of testing APIs for accuracy and functionality with Newman. 
  • Generate Reports: Newman allows us to easily generate reports on API testing results as well.
  • Debugging & Troubleshooting: On top of all that, Newman even includes debugging tools. So if anything goes wrong with our API we won’t have to spend as much time fixing the issues. This can save time and resources when working on development or testing of our APIs.
Comparing Extracts Against Baseline

In data analysis, one of the crucial tasks is to compare two datasets, the baseline data and the data extracted from a different source or at a different time. Finding discrepancies, missing values, and other anomalies can seem like an impossible task, but it’s essential for making informed decisions.

In this article we’ll explore how to compare baseline and extracted data using Python’s powerful data manipulation library, Pandas.

Section 1: Reading Data into DataFrames

The first step in comparing baseline and extracted data is to load them into Pandas DataFrames. We can use the pd.read_csv() or pd.read_table() functions to read data from CSV or custom files, respectively. 

Here’s an example:

import pandas as pd

def _load(path: str) -> pd.DataFrame:
    df = (
        pd.read_table(
            path,
            sep="|",
            header=0,
            index_col=0,
            skipinitialspace=True,
        )
        .dropna(axis=1, how="all")
        .iloc[1:]
        .applymap(lambda x: x.strip() if isinstance(x, str) else x)
    )
    df.columns = df.columns.str.strip()
    return df.reset_index(drop=True)

def read_df(baseline_path: str, extracted_path: str):
    baseline = _load(baseline_path)
    extracted = _load(extracted_path)
    return baseline, extracted
Section 2: Data Exploration and Preprocessing

Before comparing the data, it’s essential to perform data exploration and preprocessing. This includes handling missing values, removing duplicates, and converting data types. Pandas offers a wide range of functions for these tasks. For example:

# Reset the index so rows align by position
extracted = extracted.reset_index(drop=True)

# Drop duplicate rows
extracted.drop_duplicates(inplace=True)

# Replace "nan" strings and missing values with empty strings
extracted = extracted.replace("nan", "")
extracted = extracted.fillna("")
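To make the effect of these steps concrete, here is a small self-contained example; the data frame is invented purely for illustration:

```python
import pandas as pd

# A tiny frame with a duplicate row, a "nan" string, and a real missing value
df = pd.DataFrame({
    "id": [1, 1, 2],
    "value": ["a", "a", "nan"],
    "note": [None, None, "ok"],
})

df = df.drop_duplicates()               # removes the repeated first row
df = df.replace("nan", "").fillna("")   # blanks out "nan" strings and NaNs
df = df.reset_index(drop=True)          # re-number rows 0..n-1

print(df.to_dict("records"))
```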
Section 3: Data Comparison

Now, let’s get to the heart of data comparison. We can compare data in various ways, such as column-wise or row-wise comparison, and perform statistical analysis. Pandas makes it easy to compare data frames:

# Column-wise comparison
column_differences = baseline_df.compare(extracted_df)

# Row-wise comparison (matching rows)
matching_rows = baseline_df.merge(extracted_df, on='ID', how='inner')

import pandas as pd
from pandas import DataFrame

# Identify if baseline is exactly the same as extracted
def is_same(baseline: DataFrame, extracted: DataFrame) -> bool:
    return extracted.equals(baseline)

# Identify if the schema matches between baseline and extracted
def is_schema_same(baseline: DataFrame, extracted: DataFrame) -> bool:
    return len(set(baseline.dtypes).difference(set(extracted.dtypes))) == 0

# Left-to-right comparison: rows present only in the baseline
def baseline_minus_extract(
    baseline: DataFrame, extracted: DataFrame
) -> DataFrame:
    return baseline.merge(extracted, how="outer", indicator=True).loc[
        lambda x: x["_merge"] == "left_only"
    ]

# Right-to-left comparison: rows present only in the extract
def extract_minus_baseline(
    baseline: DataFrame, extracted: DataFrame
) -> DataFrame:
    return baseline.merge(extracted, how="outer", indicator=True).loc[
        lambda x: x["_merge"] == "right_only"
    ]

# Rows that exist in only one of the two frames (values changed)
def values_changed_baseline_and_extract(
    baseline: DataFrame, extracted: DataFrame
) -> DataFrame:
    return pd.concat([baseline, extracted]).drop_duplicates(keep=False)
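To see the merge-with-indicator pattern in action, here is a tiny hand-made example using the same outer-merge approach as baseline_minus_extract; the data is invented for illustration:

```python
import pandas as pd

baseline = pd.DataFrame({"ID": [1, 2, 3], "amount": [10, 20, 30]})
extracted = pd.DataFrame({"ID": [1, 2, 4], "amount": [10, 25, 40]})

# Rows present in the baseline but absent (or changed) in the extract
only_in_baseline = baseline.merge(extracted, how="outer", indicator=True).loc[
    lambda x: x["_merge"] == "left_only"
]
print(only_in_baseline[["ID", "amount"]].to_dict("records"))
```

The same pattern with "right_only" yields rows that appear only in the extract.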
Section 4: Contrasting Differences Between Baseline and Extracted Data

When comparing baseline and extracted data, we may encounter various differences, including missing values, value discrepancies, and structural differences in report format. 

Conclusion

Newman is an open-source tool for API automation. Combined with extract-comparison utilities built on Python and pandas, it becomes really powerful. Being open source and easy to integrate into existing CI/CD pipelines, it spares us from worrying about setup and configuration. In addition, we can use it with Python to easily automate any type of test and validate the results against a baseline.
