10 One-Liners for Math and Statistical Analysis

by SkillAiNest

Photo by Author | Ideogram

With its built-in modules and external libraries, Python makes it possible to perform complex mathematical and statistical operations with remarkably concise code.

In this article, we will go over some useful one-liners for math and statistical analysis. These one-liners show how to extract meaningful information from data with minimal code while maintaining readability and performance.

🔗 Link to the code on GitHub

Sample data

Before we get to the one-liners, let’s create some sample datasets to work with:

import numpy as np
import pandas as pd
from collections import Counter
import statistics

# Sample datasets
numbers = [12, 45, 7, 23, 56, 89, 34, 67, 21, 78, 43, 65, 32, 54, 76]
grades = [78, 79, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 96]
sales_data = [1200, 1500, 800, 2100, 1800, 950, 1600, 2200, 1400, 1750, 3400]
temperatures = [55.2, 62.1, 58.3, 64.7, 60.0, 61.8, 59.4, 63.5, 57.9, 56.6]

Note: I have omitted the print statements in the code snippets that follow.

1. Calculate the mean, median, and mode

When analyzing datasets, you often need multiple measures of central tendency to understand your data’s distribution. This one-liner computes all three key statistics in a single expression, providing a comprehensive overview of your data’s central characteristics.

stats = (statistics.mean(grades), statistics.median(grades), statistics.mode(grades))

This expression computes the arithmetic mean, the middle value, and the most frequent value in a single tuple assignment.
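For readability, the same computation unpacks naturally into named steps. Here is a minimal equivalent sketch (the variable names are my own):

mean_grade = statistics.mean(grades)      # arithmetic average
median_grade = statistics.median(grades)  # middle value of the sorted data
mode_grade = statistics.mode(grades)      # most frequent value
stats = (mean_grade, median_grade, mode_grade)

One caveat worth knowing: our grades happen to be all unique, and on Python 3.8+ statistics.mode() returns the first value it encounters when every value is equally common, rather than raising an error.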

2. Find outliers using the interquartile range

Identifying outliers is important for data quality assessment and anomaly detection. This one-liner implements the standard IQR method, flagging values that fall significantly outside the normal range and helping you spot potential data entry errors or genuinely unusual observations.

outliers = [x for x in sales_data if x < np.percentile(sales_data, 25) - 1.5 * (np.percentile(sales_data, 75) - np.percentile(sales_data, 25)) or x > np.percentile(sales_data, 75) + 1.5 * (np.percentile(sales_data, 75) - np.percentile(sales_data, 25))]

This list comprehension calculates the first and third quartiles, determines the IQR, and flags values that lie more than 1.5 times the IQR beyond the quartile boundaries. The Boolean logic filters the original dataset, returning only the outlier values.
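Because the one-liner calls np.percentile repeatedly, an equivalent multi-step version that computes the quartiles once can be easier to read and slightly faster. A sketch, with variable names of my own:

q1, q3 = np.percentile(sales_data, [25, 75])    # first and third quartiles
iqr = q3 - q1                                   # interquartile range
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # standard outlier fences
outliers = [x for x in sales_data if x < lower or x > upper]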

3. Calculate the correlation between two variables

Sometimes, we need to understand the relationship between variables. This one-liner computes Pearson’s correlation coefficient, which quantifies the strength of the linear relationship between two datasets and provides immediate insight into their association.

correlation = np.corrcoef(temperatures, grades[:len(temperatures)])[0, 1]

The NumPy corrcoef function returns a correlation matrix, and we extract the off-diagonal element representing the correlation between our two variables. The slicing ensures that both arrays have matching dimensions for a valid calculation.

np.float64(0.062360807968294615)
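To see where that element sits, you can inspect the full matrix returned by corrcoef; a quick sketch:

matrix = np.corrcoef(temperatures, grades[:len(temperatures)])
# The matrix is 2x2 and symmetric: the diagonal entries are 1.0
# (each variable correlated with itself), and matrix[0, 1] equals
# matrix[1, 0], the correlation we extracted above.
print(matrix.shape)  # (2, 2)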

4. Generate a descriptive statistics summary

A comprehensive statistical summary provides essential insight into your data’s distribution characteristics. This one-liner builds a dictionary containing key descriptive statistics, presenting a complete picture of your dataset’s properties in a single expression.

summary = {stat: getattr(np, stat)(numbers) for stat in ('mean', 'std', 'min', 'max', 'var')}

This dictionary comprehension uses getattr() to call the NumPy functions dynamically, creating a clean mapping of statistic names to their computed values.

{'mean': np.float64(46.8),
 'std': np.float64(24.372662281061267),
 'min': np.int64(7),
 'max': np.int64(89),
 'var': np.float64(594.0266666666666)}
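The comprehension is equivalent to calling each function explicitly; a sketch of the expanded form:

summary = {
    'mean': np.mean(numbers),  # central tendency
    'std': np.std(numbers),    # population standard deviation (NumPy's default, ddof=0)
    'min': np.min(numbers),
    'max': np.max(numbers),
    'var': np.var(numbers),    # population variance, the square of 'std'
}

Note that NumPy’s std and var default to the population formulas; pass ddof=1 if you need the sample versions.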

5. Normalize data to z-scores

Standardizing data to z-scores enables meaningful comparisons across different scales and distributions. This one-liner transforms your raw data into standardized units, expressing each value as the number of standard deviations from the mean.

z_scores = [(x - np.mean(numbers)) / np.std(numbers) for x in numbers]

The list comprehension applies the z-score formula to each element, subtracting the mean and dividing by the standard deviation.

[np.float64(-1.4278292456807755),
 np.float64(-0.07385323684555724),
 np.float64(-1.6329771258073238),
 np.float64(-0.9765039094023694),
 np.float64(0.3774720994328488),
...
 np.float64(0.29541294738222956),
 np.float64(1.1980636199390418)]
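A quick sanity check: standardized data should have a mean of approximately 0 and a standard deviation of approximately 1. A minimal sketch:

print(np.mean(z_scores))  # ~0.0 (up to floating-point error)
print(np.std(z_scores))   # ~1.0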

6. Calculate the moving average

Smoothing time series data helps reduce short-term fluctuations and noise. This one-liner computes the rolling average over a specified window, providing a cleaner view of your data’s directional movement.

moving_avg = [np.mean(sales_data[i:i+3]) for i in range(len(sales_data)-2)]

The list comprehension creates overlapping windows of three consecutive values and calculates the mean of each window. This technique is especially useful for financial data, sensor readings, and any sequential measurements where trends matter.

[np.float64(1166.6666666666667),
 np.float64(1466.6666666666667),
 np.float64(1566.6666666666667),
 np.float64(1616.6666666666667),
 np.float64(1450.0),
 np.float64(1583.3333333333333),
 np.float64(1733.3333333333333),
 np.float64(1783.3333333333333),
 np.float64(2183.3333333333335)]
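Since pandas is already imported, the same rolling mean is available as a built-in method; a sketch of the equivalent call:

rolling = pd.Series(sales_data).rolling(window=3).mean().dropna().tolist()
# Matches moving_avg: the first two positions are NaN (and dropped)
# because a 3-value window is not complete until the third element.

For long series, the pandas version is also more efficient than recomputing np.mean on each slice.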

7. Find the most frequent value range

To understand distribution patterns, you often need to identify areas of concentration within your dataset. This one-liner bins your data into ranges and finds the most populated interval, showing where your values cluster most densely.

most_frequent_range = Counter([int(x//10)*10 for x in numbers]).most_common(1)[0]

The expression bins values into ranges of ten, creates frequency counts using Counter, and extracts the most common range. This approach is valuable for understanding the histogram-like characteristics of your data’s distribution without complex plotting.
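If you want the whole distribution rather than just the winning range, you can inspect the full Counter; a short sketch:

bin_counts = Counter([int(x // 10) * 10 for x in numbers])
print(bin_counts.most_common())  # every range, ordered from most to least frequent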

8. Calculate the compound annual growth rate

Financial and business analysis often requires understanding growth rates over time. This one-liner calculates the compound annual growth rate (CAGR), providing a standardized measure of investment or business performance across different time periods.

cagr = (sales_data[-1] / sales_data[0]) ** (1 / (len(sales_data) - 1)) - 1

The formula takes the ratio of the final value to the initial value, raises it to the power of the reciprocal of the number of periods, and subtracts one to get the growth rate. This calculation assumes that each data point represents one time period in your analysis.
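For our sales_data, the pieces of the formula work out roughly as follows (values are approximate):

growth_ratio = sales_data[-1] / sales_data[0]  # 3400 / 1200 ≈ 2.83
periods = len(sales_data) - 1                  # 10 intervals between 11 points
cagr = growth_ratio ** (1 / periods) - 1       # ≈ 0.11, i.e. about 11% per period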

9. Compute running totals

Computing running totals helps track cumulative changes and identify inflection points in your data. This one-liner produces cumulative sums, showing how values accumulate over time.

running_totals = [sum(sales_data[:i+1]) for i in range(len(sales_data))]

The list comprehension computes progressively larger slices from the start of the list to each position, calculating the cumulative sum at each point.

[1200, 2700, 3500, 5600, 7400, 8350, 9950, 12150, 13550, 15300, 18700]
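The slicing approach recomputes each sum from scratch, which is quadratic in the length of the list. The standard library’s itertools.accumulate produces the same totals in a single linear pass; a sketch:

from itertools import accumulate

running_totals = list(accumulate(sales_data))  # same values, O(n) instead of O(n^2)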

10. Calculate the coefficient of variation

Relative variability measures are needed to compare variation across datasets with different scales. This one-liner expresses the standard deviation as a percentage of the mean, enabling meaningful comparisons across different measurement units.

cv = (np.std(temperatures) / np.mean(temperatures)) * 100

The calculation divides the standard deviation by the mean and multiplies by 100 to express the result as a percentage. This normalized measure of spread is especially useful when comparing datasets with different units or scales.

np.float64(4.840958085381635)
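The payoff of the coefficient of variation is comparing spread across datasets with very different scales. For example, comparing temperatures and sales_data by raw standard deviation would be meaningless, but their CVs are directly comparable; a sketch:

cv_temps = (np.std(temperatures) / np.mean(temperatures)) * 100
cv_sales = (np.std(sales_data) / np.mean(sales_data)) * 100
# Despite very different units and magnitudes, the two percentages
# can be compared directly to see which dataset is relatively noisier.
print(f"temperatures: {cv_temps:.1f}%, sales: {cv_sales:.1f}%")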

Conclusion

These one-liners show how to perform mathematical and statistical operations with minimal code. Writing effective one-liners is about balancing conciseness with readability, making sure your code remains maintainable.

Remember that while one-liners are powerful, complex analyses often benefit from breaking operations into multiple steps.

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she is working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, and more. She also creates engaging resource overviews and coding tutorials.
