Training a simple model

Machine learning appears in this syllabus because many agent systems benefit from small, local models that make fast, repeatable decisions. Before introducing complex frameworks or large models, we need a grounded sense of how classical machine learning works in everyday Python code. This lesson provides that orientation using scikit-learn, the most common entry point for practical machine learning in Python.

The role of scikit-learn in Python

Scikit-learn is a widely used Python library that provides ready-made implementations of classical machine learning algorithms. It is built on top of NumPy and SciPy and favors consistency, clarity, and practicality over novelty.

In real programs, scikit-learn often plays a supporting role. It supplies simple models that can classify, score, or predict outcomes as part of a larger system. These models are usually trained once and then reused many times.

Representing training data as features and labels

Supervised machine learning relies on examples with known outcomes. Each example is split into two parts: features and a label. Features are the measurable inputs, and the label is the value we want the model to learn to predict.

In Python, features are typically represented as a list of rows, where each row is a list of numbers describing one example. Labels are stored in a separate list, with one entry per example. The shapes matter: scikit-learn expects a two-dimensional structure for the features and a one-dimensional one for the labels, but plain Python lists are fine because they are converted to NumPy arrays internally.

features = [
    [1.0, 0.2],
    [2.1, 1.9],
    [0.9, 0.1],
]

labels = [0, 1, 0]

Each inner list represents one training example, and each label corresponds to the same position in the feature list.

Training a simple supervised model

Training a model means asking it to learn patterns that map features to labels. In scikit-learn this is an explicit, two-step process: we create a model object and then call its fit method with the training data.

from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit(features, labels)

The model now contains learned parameters derived from the training examples. At this point, it is ready to be used for predictions.
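As a quick illustration, we can ask the trained model to label a new example by calling its predict method. The feature values below are made up purely for demonstration; predict expects the same two-dimensional shape used for training, one row per example.

new_examples = [[1.8, 1.7]]  # one new example with the same two features

print(model.predict(new_examples))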

Running a training step from a Python program

Model training is just regular Python code. It runs when the script is executed and completes as part of the program’s normal flow. There is no special runtime or interactive environment required.

If this code lives in a file named train_model.py, we run it like any other Python program.

python train_model.py

This makes training easy to automate, repeat, or integrate into larger build steps.
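Pulled together, a minimal train_model.py might look like the sketch below, reusing the toy features and labels shown earlier; the data values are illustrative only.

from sklearn.linear_model import LogisticRegression

# Toy training data: each inner list is one example, each label matches by position.
features = [
    [1.0, 0.2],
    [2.1, 1.9],
    [0.9, 0.1],
]
labels = [0, 1, 0]

def main():
    # Train the model and print a prediction for one new example as a sanity check.
    model = LogisticRegression()
    model.fit(features, labels)
    print(model.predict([[1.8, 1.7]]))

if __name__ == "__main__":
    main()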

Inspecting basic training outcomes

After training, we can inspect simple signals that confirm the model has learned something. Many models expose attributes that summarize the training result, such as learned coefficients or intercept values.

print(model.coef_)
print(model.intercept_)

These values are rarely interpreted directly in production code, but they are useful for confirming that training occurred and produced concrete parameters.
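Another lightweight check, assuming the training data is still in scope, is the model's score method. For a classifier it reports accuracy, and calling it on the training data only confirms the model can reproduce the labels it was trained on, not how it would perform on new data.

print(model.score(features, labels))  # accuracy on the training data itself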

Conclusion

We set out to understand what it looks like to train a basic machine learning model inside a normal Python program. We now know the role scikit-learn plays, how training data is represented, how a supervised model is trained, and how to observe the results of that training. With this orientation, it becomes much easier to treat machine learning as a practical tool rather than a black box.