Image: A visual representation of neural networks, the foundation of AI learning. (Source: Unsplash)
Exploring How AI Learns
In this post, we’ll explore how artificial intelligence (AI) learns using a basic Python example. We’ll simulate a simple AI that learns to predict whether a number is even or odd—a tiny step toward understanding machine learning. The AI will use a concept called a perceptron, which is like a single neuron in a neural network. It takes an input (a number), processes it with a weight and bias, and makes a prediction. Over time, it adjusts these values based on errors to “learn” the pattern.
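Before the full example, here is that prediction step in isolation, a minimal sketch with made-up weight and bias values (the numbers below are illustrative, not learned):

```python
def predict(x, weight, bias):
    # Step activation: fire (1) if the weighted input crosses the threshold, else 0
    return 1 if weight * x + bias >= 0 else 0

print(predict(3, weight=0.5, bias=-1.0))  # 0.5 * 3 - 1.0 = 0.5, so this prints 1
print(predict(1, weight=0.5, bias=-1.0))  # 0.5 * 1 - 1.0 = -0.5, so this prints 0
```

Everything that follows is about how the perceptron finds a good weight and bias on its own instead of us picking them by hand.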
What We’ll Do
Here’s the plan:
- Create a dataset of numbers labeled as even (0) or odd (1).
- Build a perceptron with a random weight and bias.
- Train it by tweaking the weight and bias when it makes mistakes.
- Test it on new numbers.
The code below implements this idea. It’s simple, but it shows the core of AI learning: adjusting based on feedback. Let’s dive in!

Image: Coding in Python—our tool for building this AI perceptron. (Source: Pexels)
The Python Code
Below is the Python code to create and train our perceptron. It’s a basic example, but it captures the essence of how AI learns through trial and error.
import numpy as np

class Perceptron:
    def __init__(self):
        self.weight = np.random.rand()  # Random starting weight
        self.bias = np.random.rand()    # Random starting bias

    def predict(self, x):
        return 1 if (x * self.weight + self.bias) > 0 else 0

    def train(self, x, y, learning_rate=0.1):
        prediction = self.predict(x)
        error = y - prediction
        self.weight += learning_rate * error * x
        self.bias += learning_rate * error

# Dataset: [number, label] (even=0, odd=1)
data = [[2, 0], [3, 1], [4, 0], [5, 1]]

perceptron = Perceptron()

# Training loop
for _ in range(10):  # 10 epochs
    for x, y in data:
        perceptron.train(x, y)

# Test: the true labels are 6 -> 0 (even) and 7 -> 1 (odd), but a single
# perceptron on the raw number can't learn parity exactly, so it may miss
print(perceptron.predict(6))
print(perceptron.predict(7))

This is a minimal version; a fuller one that prints its predictions before and after training appears at the end of the post.
How It Works
The perceptron starts with random guesses (weight and bias). For each number in our dataset, it predicts “even” (0) or “odd” (1). If it’s wrong, it adjusts the weight and bias based on the error—gradually learning the pattern. After training, it can classify new numbers. This feedback loop is the heartbeat of machine learning!
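To make that adjustment concrete, here is a single training update worked through by hand. The starting weight, bias, and learning rate are made-up values for illustration:

```python
weight, bias, learning_rate = 0.5, -1.0, 0.1
x, target = 2, 0                     # 2 is even, so the correct label is 0

prediction = 1 if weight * x + bias >= 0 else 0  # 0.5*2 - 1.0 = 0.0, so it predicts 1 (wrong)
error = target - prediction         # 0 - 1 = -1: the guess was too high

weight += learning_rate * error * x  # 0.5 + 0.1 * (-1) * 2 = 0.3
bias += learning_rate * error        # -1.0 + 0.1 * (-1) = -1.1

print(round(weight, 2), round(bias, 2))  # prints: 0.3 -1.1
```

Both values moved down, making the perceptron less likely to shout "odd" for this input next time. Repeat this over many examples and the numbers settle toward something useful.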
Image: Visualizing data processing—how the perceptron refines its predictions. (Source: Unsplash)
Why It Matters
This tiny perceptron is a building block for bigger things. Stack thousands of them into a neural network, and you get the AI powering chatbots, image recognition, and more. It’s a small step, but it shows how machines learn from data—just like we learn from experience.
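As a small taste of that stacking: a single perceptron famously cannot compute XOR, but two perceptrons feeding a third can. The weights below are hand-picked for illustration rather than learned:

```python
def perceptron(inputs, weights, bias):
    # Weighted sum followed by a step activation
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total >= 0 else 0

def xor(a, b):
    h1 = perceptron([a, b], [1, 1], -0.5)      # fires if a OR b
    h2 = perceptron([a, b], [1, 1], -1.5)      # fires if a AND b
    return perceptron([h1, h2], [1, -2], -0.5) # OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

The hidden layer (h1, h2) transforms the inputs into a representation where one threshold suffices, which is exactly what deep networks learn to do automatically.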
Want to try it yourself? Below is a fuller, self-contained version of the code that prints its predictions before and after training, so you can watch it learn. Tweak it, test new numbers, or share your thoughts in the comments!
# Import random for initial values
import random

# Step 1: Create a small dataset (number, label: 0 for even, 1 for odd)
dataset = [(2, 0), (3, 1), (4, 0), (5, 1), (6, 0)]

# Step 2: Perceptron class to predict and learn
class Perceptron:
    def __init__(self):
        self.weight = random.uniform(-1, 1)  # Random starting weight
        self.bias = random.uniform(-1, 1)    # Random starting bias
        self.learning_rate = 0.1             # How fast it learns

    def predict(self, number):
        # If weight * number + bias >= 0, predict 1 (odd), else 0 (even)
        result = self.weight * number + self.bias
        return 1 if result >= 0 else 0

    def train(self, number, target):
        prediction = self.predict(number)
        error = target - prediction  # Difference between actual and predicted
        # Adjust weight and bias based on the error
        self.weight += self.learning_rate * error * number
        self.bias += self.learning_rate * error

# Step 3: Create and train the perceptron
ai = Perceptron()
print("Before training:")
for number, target in dataset:
    print(f"Number: {number}, Predicted: {ai.predict(number)}, Actual: {target}")

# Train for 10 rounds (epochs)
for _ in range(10):
    for number, target in dataset:
        ai.train(number, target)

# Step 4: Test after training
print("\nAfter training:")
for number, target in dataset:
    print(f"Number: {number}, Predicted: {ai.predict(number)}, Actual: {target}")

# Test on a new number
test_number = 7
print(f"\nTesting new number {test_number}: Predicted {ai.predict(test_number)} (Actual: 1)")
When you run this code, you'll see the AI start with random guesses, so its predictions before training may be all over the place. After 10 rounds of training it usually improves, but don't expect perfection: even/odd is not linearly separable on the raw number, so no single weight and bias can draw one threshold that puts every even on one side and every odd on the other. The perceptron can still get individual numbers right, for example guessing that 7 is odd (1).
This is a simplified version of how AI learns. Real machine learning uses larger datasets, more complex models (neural networks with many neurons arranged in layers), and more advanced math (such as gradient descent). But the core idea, adjusting based on mistakes, is the same. Try tweaking the code: change the dataset, the learning rate, or the number of training rounds, and see how the accuracy changes.
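Here is one such experiment, a sketch under my own assumptions (the train/accuracy helpers, the fixed seed, and the 1-to-10 dataset are mine, not from the post): it compares training on the raw number against training on the feature x % 2, which makes even/odd trivially separable.

```python
import random

random.seed(0)  # fix the seed so runs are repeatable

def accuracy(weight, bias, data, feature):
    correct = 0
    for x, y in data:
        prediction = 1 if weight * feature(x) + bias >= 0 else 0
        correct += (prediction == y)
    return correct / len(data)

def train(data, feature, epochs=20, learning_rate=0.1):
    weight = random.uniform(-1, 1)
    bias = random.uniform(-1, 1)
    for _ in range(epochs):
        for x, y in data:
            prediction = 1 if weight * feature(x) + bias >= 0 else 0
            error = y - prediction
            weight += learning_rate * error * feature(x)
            bias += learning_rate * error
    return weight, bias

data = [(n, n % 2) for n in range(1, 11)]

# Raw number as the input: parity is not linearly separable this way,
# so accuracy stays stuck well below 100%
w_raw, b_raw = train(data, feature=lambda x: x)
print("raw number:", accuracy(w_raw, b_raw, data, lambda x: x))

# x % 2 as the input: one threshold now separates the classes perfectly
w_mod, b_mod = train(data, feature=lambda x: x % 2)
print("x % 2:     ", accuracy(w_mod, b_mod, data, lambda x: x % 2))
```

This illustrates a general lesson: how you represent the input often matters as much as the learning rule itself.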