How to Improve Your Code Even If You Are a Beginner

by SkillAiNest

Photo by Author | Ideogram

Let’s be honest. When you are learning, you are probably not thinking about performance. You are just trying to make your code work! But here’s the thing: you don’t have to become an expert programmer overnight to write faster code.

With a few simple techniques I will show you today, you can significantly improve your code’s speed and memory usage.

In this article, we will walk through five practical, beginner-friendly optimization techniques. For each one, I will show you the “before” code (the way many beginners write it), the “after” code (the improved version), and explain why the improvement works and how much faster it is.

🔗 Link to the code on GitHub

1. Replace loops with list comprehensions

Let’s start with something you do all the time: transforming existing lists to create new ones. Most beginners reach for a for loop, but Python has a much faster way to do this.

Before optimization

Here’s how most beginners would square a list of numbers:

import time

def square_numbers_loop(numbers):
    result = []  # Start with an empty list
    for num in numbers: 
        result.append(num ** 2) 
    return result

# Let's test this with 1000000 numbers to see the performance
test_numbers = list(range(1000000))

start_time = time.time()
squared_loop = square_numbers_loop(test_numbers)
loop_time = time.time() - start_time
print(f"Loop time: {loop_time:.4f} seconds")

This code creates an empty list called result, then loops through each number in our input list, squares it, and appends it to the result list. Pretty straightforward, right?

After optimization

Now let’s rewrite it using a list comprehension:

def square_numbers_comprehension(numbers):
    return [num ** 2 for num in numbers]  # Build the entire list in one line

start_time = time.time()
squared_comprehension = square_numbers_comprehension(test_numbers)
comprehension_time = time.time() - start_time
print(f"Comprehension time: {comprehension_time:.4f} seconds")
print(f"Improvement: {loop_time / comprehension_time:.2f}x faster")

This single line [num ** 2 for num in numbers] does exactly what our loop did, but it tells Python “build a list where each element is the square of the corresponding element in numbers.”

Output:

Loop time: 0.0840 seconds
Comprehension time: 0.0736 seconds
Improvement: 1.14x faster

Performance improvement: List comprehensions are typically 30–50% faster than equivalent loops. The improvement is more noticeable when you work with very large iterables.

Why does it work? List comprehensions are implemented in C under the hood, so they avoid much of the overhead that comes with Python for loops: things like variable lookups and function calls that happen behind the scenes.
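As a quick extension beyond the benchmark above, comprehensions can also filter while they transform, keeping the same speed advantage. A minimal sketch (the numbers variable here is just illustrative):

# Square only the even numbers in a single pass
numbers = range(10)
even_squares = [num ** 2 for num in numbers if num % 2 == 0]
print(even_squares)  # [0, 4, 16, 36, 64]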

2. Choose the right data structure for the job

This one is huge, and it is something that can make your code dramatically faster with just a small change. The key is understanding when to use lists versus sets versus dictionaries.

Before optimization

Say you want to find the common elements between two lists. Here’s the intuitive approach:

def find_common_elements_list(list1, list2):
    common = []  # Start with an empty list
    for item in list1:  # Go through each item in the first list
        if item in list2:  # Check if it exists in the second list
            common.append(item)  # If yes, add it to our common list
    return common

# Test with reasonably large lists
large_list1 = list(range(10000))     
large_list2 = list(range(5000, 15000))

start_time = time.time()
common_list = find_common_elements_list(large_list1, large_list2)
list_time = time.time() - start_time
print(f"List approach time: {list_time:.4f} seconds")

This code loops through the first list, and for each item, checks whether it exists in the second list using item in list2. The problem? Every time you run item in list2, Python has to search through the entire list until it finds the item. That’s slow!

After optimization

Here is the same logic, but using a set for fast lookups:

def find_common_elements_set(list1, list2):
    set2 = set(list2)  # Convert list to a set (one-time cost)
    return [item for item in list1 if item in set2]  # Check membership in the set

start_time = time.time()
common_set = find_common_elements_set(large_list1, large_list2)
set_time = time.time() - start_time
print(f"Set approach time: {set_time:.4f} seconds")
print(f"Improvement: {list_time / set_time:.2f}x faster")

First, we convert the second list into a set. Then, instead of checking whether each item is in list2, we check whether it is in set2. This small change makes each membership test nearly instantaneous.

Output:

List approach time: 0.8478 seconds
Set approach time: 0.0010 seconds
Improvement: 863.53x faster

Performance improvement: This can give a 100x or greater speedup on large datasets.

Why does it work? Sets use hash tables under the hood. When you check whether an item is in a set, Python doesn’t search through every element. It uses the hash to jump directly to where the item should be. It’s like using a book’s index to find what you want instead of reading every page.
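If you only need the common elements and don’t care about order or duplicates, a related sketch (not part of the original benchmark) is to convert both lists and use set intersection directly:

def find_common_elements_intersection(list1, list2):
    # The & operator returns the unique elements present in both sets
    return list(set(list1) & set(list2))

common = find_common_elements_intersection(large_list1, large_list2)
print(len(common))  # 5000 shared values for the test lists above

Note that this version drops duplicates and does not preserve the order of list1, so use it only when that is acceptable.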

3. Use Python’s built-in functions whenever possible

Python comes with heavily optimized built-in functions. Before you write your own loop or custom function to do something, check whether Python already has a function for it.

Before optimization

Here’s how you might calculate the sum and maximum of a list if you didn’t know about the built-ins:

def calculate_sum_manual(numbers):
    total = 0
    for num in numbers:  
        total += num     
    return total

def find_max_manual(numbers):
    max_val = numbers[0]
    for num in numbers[1:]:
        if num > max_val:    
            max_val = num   
    return max_val

test_numbers = list(range(1000000))  

start_time = time.time()
manual_sum = calculate_sum_manual(test_numbers)
manual_max = find_max_manual(test_numbers)
manual_time = time.time() - start_time
print(f"Manual approach time: {manual_time:.4f} seconds")

The manual sum function starts the total at 0, then adds each number to it. The manual max function starts by assuming the first number is the maximum, then compares every other number against it to see if it is bigger.

After optimization

Here is the same thing using Python’s built-in functions:

start_time = time.time()
builtin_sum = sum(test_numbers)    
builtin_max = max(test_numbers)    
builtin_time = time.time() - start_time
print(f"Built-in approach time: {builtin_time:.4f} seconds")
print(f"Improvement: {manual_time / builtin_time:.2f}x faster")

That’s it! sum() gives the total of the numbers in the list, and max() returns the largest number. Same result, much faster.

Output:

Manual approach time: 0.0805 seconds
Built-in approach time: 0.0413 seconds
Improvement: 1.95x faster

Performance improvement: Built-in functions are usually significantly faster than manual implementations.

Why does it work? Built-in functions are written in C and heavily optimized.
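A few other built-ins are worth keeping in mind. This is just a quick illustrative sketch that reuses the test_numbers list from above:

print(min(test_numbers))                        # smallest value: 0
print(sorted([3, 1, 2]))                        # returns a new sorted list: [1, 2, 3]
print(any(n > 999998 for n in test_numbers))    # True: at least one value matches
print(all(n >= 0 for n in test_numbers))        # True: every value is non-negative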

4. Perform efficient string operations with join()

String concatenation is something every programmer does, but most do it in a way that gets slower and slower as the string grows.

Before optimization

Here’s how you might build a CSV string by concatenating with the + operator:

def create_csv_plus(data):
    result = ""  # Start with an empty string
    for row in data:  # Go through each row of data
        for i, item in enumerate(row):  # Go through each item in the row
            result += str(item)  # Add the item to our result string
            if i < len(row) - 1:  # If it's not the last item
                result += ","     # Add a comma
        result += "\n"  # Add a newline after each row
    return result

# Test data: 1000 rows with 10 columns each
test_data = [[f"item_{i}_{j}" for j in range(10)] for i in range(1000)]

start_time = time.time()
csv_plus = create_csv_plus(test_data)
plus_time = time.time() - start_time
print(f"String concatenation time: {plus_time:.4f} seconds")

This code builds our CSV string piece by piece. For every row, it goes through each item, converts it to a string, and appends it to the result. It adds a comma between items and a newline after each row.

After optimization

Here’s the same code using the join() method:

def create_csv_join(data):
    # For each row, join the items with commas, then join all rows with newlines
    return "\n".join(",".join(str(item) for item in row) for row in data)

start_time = time.time()
csv_join = create_csv_join(test_data)
join_time = time.time() - start_time
print(f"Join method time: {join_time:.4f} seconds")
print(f"Improvement: {plus_time / join_time:.2f}x faster")

This single line does a lot! The inner part ",".join(str(item) for item in row) takes each row and joins all of its items with commas. The outer "\n".join(...) takes all of those comma-joined rows and joins them with newlines.

Output:

String concatenation time: 0.0043 seconds
Join method time: 0.0022 seconds
Improvement: 1.94x faster

Performance improvement: String joining is much faster than concatenation for large strings.

Why does it work? When you use += to concatenate strings, Python creates a new string object every time because strings are immutable. With a large string, that becomes incredibly wasteful. The join method calculates exactly how much memory it needs and builds the string once.
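When the whole result can’t be written as one expression, a common pattern is to collect the pieces in a list and join once at the end instead of using += repeatedly. A sketch using the same test_data (the function name here is just illustrative):

def create_csv_parts(data):
    parts = []  # collect string fragments instead of concatenating
    for row in data:
        parts.append(",".join(str(item) for item in row))
    return "\n".join(parts)  # one final join builds the string in a single pass

csv_parts = create_csv_parts(test_data)
print(csv_parts == csv_join)  # True: same output as the join-based version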

5. Use generators for memory-efficient processing

Sometimes you don’t need to store all your data in memory at once. Generators let you produce data on demand, which can save huge amounts of memory.

Before optimization

Here’s how you might process a large dataset by storing everything in a list:

import sys

def process_large_dataset_list(n):
    processed_data = []
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        processed_data.append(processed_value)  # Store each processed value
    return processed_data

# Test with 100,000 items
n = 100000
list_result = process_large_dataset_list(n)
list_memory = sys.getsizeof(list_result)
print(f"List memory usage: {list_memory:,} bytes")

This function processes the numbers from 0 to n-1, applies some calculation to each (squaring, multiplying by 3, and adding 42), and stores every result in a list. The problem is that we keep all 100,000 processed values in memory at the same time.

After optimization

Here’s the same processing using a generator:

def process_large_dataset_generator(n):
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        yield processed_value  # Yield each value instead of storing it

# Create the generator (this doesn't process anything yet!)
gen_result = process_large_dataset_generator(n)
gen_memory = sys.getsizeof(gen_result)
print(f"Generator memory usage: {gen_memory:,} bytes")
print(f"Memory improvement: {list_memory / gen_memory:.0f}x less memory")

# Now we can process items one at a time
total = 0
for value in process_large_dataset_generator(n):
    total += value
    # Each value is processed on-demand and can be garbage collected

The key difference is yield instead of append. The yield keyword makes this a generator function: it produces values one at a time instead of building the whole list.

Output:

List memory usage: 800,984 bytes
Generator memory usage: 224 bytes
Memory improvement: 3576x less memory

Performance improvement: Generators can use dramatically less memory for large datasets.

Why does it work? Generators use lazy evaluation: they only compute values when you ask for them. The generator object itself is tiny; it only remembers where it is in the computation.
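To see that laziness in action, here is a small sketch: nothing is computed until you ask for the next value, and a generator expression gives the same memory benefit for one-off aggregations:

gen = process_large_dataset_generator(5)  # no values computed yet
print(next(gen))  # 42 -> 0**2 + 0*3 + 42, computed only now
print(next(gen))  # 46 -> 1**2 + 1*3 + 42

# A generator expression feeds sum() one value at a time, never building a full list
total = sum(i ** 2 + i * 3 + 42 for i in range(100000))
print(total)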

Conclusion

Improving your code doesn’t have to be intimidating. As we have seen, small changes in how you approach common programming tasks can make dramatic improvements in both speed and memory usage. The key is building an intuition for choosing the right tool for each job.

Remember these basic principles: use built-in functions when they exist, choose the appropriate data structure for your use case, avoid unnecessary repeated work, and be mindful of how you handle memory. List comprehensions, sets for membership testing, string joining, and generators for large datasets are all tools that belong in every beginner programmer’s toolkit. Keep learning, keep coding!

Pray Ca is a developer and technical writer from India. She likes to work at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she is learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. She also creates engaging resource overviews and coding tutorials.
