0. Deep learning

Notes from the fast.ai course, written for myself. Not comprehensive or thorough in every part.

The original notebook, with thorough explanatory text in English: 01.intro.ipynb

  • An NVIDIA graphics card (GPU, Graphics Processing Unit) is required. Cards that are good for gaming and 3D are generally also suitable for deep learning. In Google Colab, GPU usage has to be enabled separately.
  • The environment is a Jupyter notebook, either on your own computer or in Google Colab, Gradient, etc.
  • Run the code in a cell: Shift+Enter
  • Every model has two inputs: 1. data (inputs) and 2. model parameters/weights.
Model training (older terminology)
Model training in modern terms
  • A trained model is like an ordinary computer program.
A trained model
  • A model cannot be created without data.
  • The data must be labelled (labels).
  • The model outputs a prediction/probability (predictions), 0-100%.
  • classification – predicts a class or category. For example: cat, dog, etc.
  • regression – predicts a numeric value. For example: temperature, location.
  • The dataset is split into a validation set (20%) and a training set (80%).
  • Avoid overfitting, where the model memorizes the specific training data and no longer generalizes to new data.
  • pretrained model – a model that has already been trained on some other dataset. In most cases it is advisable to use pretrained models, since they are already very capable before we feed in our own data.
  • Image recognition models can also be used for data that is, at first glance, not image-like: audio can be turned into a spectrogram, time series into charts, computer mouse movements on a mat into coloured lines, and so on.
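The 80/20 training/validation split from the bullets above can be sketched in plain Python. This is a minimal illustration, not fastai's implementation (fastai does this internally via the valid_pct argument), and the file names are made up.

```python
import random

def train_valid_split(items, valid_pct=0.2, seed=42):
    """Shuffle items reproducibly and hold out valid_pct of them for validation."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_valid = int(len(items) * valid_pct)
    return items[n_valid:], items[:n_valid]  # training set, validation set

files = [f"img_{i}.jpg" for i in range(10)]  # hypothetical file names
train, valid = train_valid_split(files)
print(len(train), len(valid))  # 8 2
```

Fixing the seed gives the same split on every run, so results stay comparable between runs.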

A short summary of machine learning

  • Machine learning is a situation where we do not write the program logic ourselves from scratch; instead, part of the program learns the logic from the data.
  • Deep learning is a neural network with many layers. Image classification, i.e. image recognition, is a typical example. The starting point is a labelled dataset, i.e. every image has a label saying what it depicts. The goal is a program, i.e. a model, that, when given a new image, returns a prediction of what it depicts.
  • Every model starts with choosing an architecture. The architecture is a general template for how the model works internally.
  • Training (or fitting) is the process of finding, within the chosen architecture, a set of parameter values (weights) that fits our particular data.
  • To determine how well the model performs on a single prediction, we define a loss function, which determines how good or bad we judge a prediction to be.
  • To make the training process faster, we can use a pretrained model: a model that has already been trained on someone else's data. We then only need to train it a little further on our own data (fine-tuning).
  • When training a model, it is important that it generalizes well, i.e. that it works well on new data it has never seen before. Overfitting is a situation where the model works very well on the training data but not on new data; the model has, so to speak, memorized the specific data.
  • To avoid this, the data is split in two: a training set and a validation set.
  • So that a human can judge how well the model is doing on the validation set, we define a metric.
  • When the model has seen all the training data once during training, that is called an epoch.
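The difference between a loss function (smooth, used to drive training) and a metric (readable by humans) from the summary above can be illustrated with a toy binary classifier. Plain Python only; these are not the fastai functions, and the numbers are invented.

```python
import math

def binary_cross_entropy(probs, labels):
    """Loss: smooth, and punishes confident wrong predictions heavily."""
    eps = 1e-7  # avoid log(0)
    return -sum(l * math.log(p + eps) + (1 - l) * math.log(1 - p + eps)
                for p, l in zip(probs, labels)) / len(labels)

def accuracy(probs, labels):
    """Metric: fraction of correct predictions at a 0.5 threshold."""
    return sum((p > 0.5) == bool(l) for p, l in zip(probs, labels)) / len(labels)

probs  = [0.9, 0.8, 0.3, 0.6]  # predicted probabilities of "cat"
labels = [1,   1,   0,   0]    # true labels (1 = cat)

print(accuracy(probs, labels))              # 0.75
print(binary_cross_entropy(probs, labels))  # lower is better
```

Training optimizes the loss because it changes smoothly with the parameters; the metric only jumps when a prediction crosses the threshold, which is fine for a human but useless for gradient descent.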

Recognizing cats and dogs

Colab notebook with all the code.

  • Uses the Oxford-IIIT Pet Dataset, a cats-and-dogs dataset compiled at the University of Oxford.
  • We use a model that has already been trained on 1.3 million images (a pretrained model).
  • The pretrained model is fine-tuned and adapted specifically for recognizing images of cats and dogs, using transfer learning.

0. For fastai to work in Google Colab, you first need:

!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
from fastbook import *
  1. Import the fastai.vision library:
from fastai.vision.all import *

2. The dataset. Downloads a standard dataset, unpacks it, and returns its location (path).

path = untar_data(URLs.PETS)/'images'

3. A helper function. Labels the cat images based on the file name, using a rule defined by the dataset's creators.

def is_cat(x):
  # Returns True if the first letter is uppercase, i.e. it is a cat
  return x[0].isupper()

Tells fastai what kind of dataset we have and how it is structured. 224 px is a historical standard, and many older models require images of this size. Larger images can give better results, since more detail is preserved, but the price is increased processing time and memory use.

dls = ImageDataLoaders.from_name_func(
    path,
    get_image_files(path),
    valid_pct=0.2,        # Hold out 20% of the data for validation
    seed=42,              # Fix the random seed so every run gives the same split
    label_func=is_cat,    # Labelling function
    item_tfms=Resize(224) # Resize every image to a 224-pixel square
)

4. Train the model.

Type of neural network: convolutional neural network (CNN). Currently the most popular choice for building computer vision models, inspired by how the human visual system works.

Network architecture: ResNet, where 34 is the number of layers. Other options are 18, 50, 101, and 152 layers. The more layers, the longer training takes and the greater the risk of overfitting. With less data, fewer layers are needed, and vice versa.

metrics is a function that measures the quality of the predictions after each epoch. Here: error_rate, which returns the percentage of incorrectly predicted images. Another option is accuracy, which returns 1.0 - error_rate.
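The relationship error_rate = 1.0 - accuracy is easy to check with a toy example (plain Python, not the actual fastai implementations; the predictions are made up):

```python
preds  = ["cat", "dog", "cat", "cat", "dog"]  # hypothetical model predictions
labels = ["cat", "dog", "dog", "cat", "dog"]  # true labels

error_rate = sum(p != l for p, l in zip(preds, labels)) / len(labels)
accuracy = 1.0 - error_rate

print(error_rate, accuracy)  # 0.2 0.8
```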

pretrained – defaults to True if we do not set it explicitly, as here. The model is pretrained on the ImageNet dataset, which contains over 1.4 million images.

When using a pretrained model, cnn_learner removes the model's last layer and replaces it with one or more new layers adapted to the new data. This last part is also called the head.

Using a pretrained model for a task other than the one it was originally trained for is known as transfer learning.

fine_tune – fits the model. Its argument is the number of epochs, i.e. how many times each image is looked at. It is used in transfer learning to update the model's parameters.

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1) # fit model

5. Try it with your own image.

Create an image upload widget:

uploader = widgets.FileUpload()
uploader
img = PILImage.create(uploader.data[0])
is_cat,_,probs = learn.predict(img)
print(f"Is it a cat?: {is_cat}.")
print(f"Probability: {probs[1].item():.6f}")

Terms

Term – Meaning
Label – The data that we're trying to predict, such as "dog" or "cat"
Architecture – The template of the model that we're trying to fit; the actual mathematical function that we're passing the input data and parameters to
Model – The combination of the architecture with a particular set of parameters
Parameters – The values in the model that change what task it can do, and are updated through model training
Fit – Update the parameters of the model such that the predictions of the model using the input data match the target labels
Train – A synonym for fit
Pretrained model – A model that has already been trained, generally using a large dataset, and will be fine-tuned
Fine-tune – Update a pretrained model for a different task
Epoch – One complete pass through the input data
Loss – A measure of how good the model is, chosen to drive training via SGD
Metric – A measurement of how good the model is, using the validation set, chosen for human consumption
Validation set – A set of data held out from training, used only for measuring how good the model is
Training set – The data used for fitting the model; does not include any data from the validation set
Overfitting – Training a model in such a way that it remembers specific features of the input data, rather than generalizing well to data not seen during training
CNN – Convolutional neural network; a type of neural network that works particularly well for computer vision tasks

Fastai

To display information about a fastai function, for example learn.predict:

doc(learn.predict)

Links

Malevich

This artwork is inspired by the famous Suprematist artist Kazimir Malevich (1879-1935) and his painting "Black Square".

Kazimir Malevich, 1915, Black Suprematic Square (Wikipedia)

But I am not so radical: I have four LED squares inside one black square, slowly blinking their differently coloured lights.

I need to make another video when there is less light, so you can actually see the colours.

Links

The artistic shape detection algorithm

Today I learned a simple shape detection algorithm. In a high-contrast image, it tries to find how many corners each shape has: when there are three, it is a triangle, and so on. It then draws a coloured contour around the shape. In a controlled environment it mostly works as intended; released into the real wild world, it gives quite artistic results.

#!/usr/bin/env python3

'''
Shape detection from images.
Tauno Erik
13.05.2021
'''

import cv2 as cv
import os

# Colors (BGR)
RED = (0,0,255)
GREEN = (0,255,0)
BLUE = (255,0,0)
YELLOW = (0,255,255)
CYAN = (255,255,0)
MAGENTA = (255,0,255)
ORANGE = (0,140,255)
PINK = (147,20,255)
PURPLE = (128,0,128)
GOLDEN = (32,165,218)
BROWN = (42,42,165)

def full_path(filename):
  ''' Returns full path to file. '''
  folder = os.path.dirname(__file__) # File location
  full_path = os.path.join(folder, filename)
  return full_path

def shape_detection(file):
  img = cv.imread(file)
  gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
  ret, thresh = cv.threshold(gray, 50, 255, cv.THRESH_BINARY_INV)
  contours, h = cv.findContours(thresh, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)

  img_hall = cv.cvtColor(gray, cv.COLOR_GRAY2BGR)

  for cnt in contours:
    approx = cv.approxPolyDP(cnt, 0.01*cv.arcLength(cnt, True), True)  # Returns array
    print("Shape with {} points".format(len(approx)))
    n = len(approx)

    if n == 15 or n == 16:
      cv.drawContours(img_hall, [cnt], 0, YELLOW, 5)
    elif n == 12:
      cv.drawContours(img_hall, [cnt], 0, BROWN, 5)
    elif n == 11:
      cv.drawContours(img_hall, [cnt], 0, GOLDEN, 5)
    elif n == 10:
      cv.drawContours(img_hall, [cnt], 0, PURPLE, 5)
    elif n == 9:
      cv.drawContours(img_hall, [cnt], 0, PINK, 5)
    elif n == 8:
      cv.drawContours(img_hall, [cnt], 0, CYAN, 5)
    elif n == 7:
      cv.drawContours(img_hall, [cnt], 0, ORANGE, 5)
    elif n == 6:
      cv.drawContours(img_hall, [cnt], 0, CYAN, 5)
    elif n == 5:
      cv.drawContours(img_hall, [cnt], 0, RED, 5)
    elif n == 4:
      cv.drawContours(img_hall, [cnt], 0, GREEN, 5)
    elif n == 3:
      cv.drawContours(img_hall, [cnt], 0, BLUE, 5)

  cv.imshow('Shapes', img_hall)
  cv.waitKey(0)


if __name__ == "__main__":
  print('Shape detection!')
  print('To close the window press any key')

  file = full_path('images/kujundid.jpg')
  shape_detection(file)

Motion detection on the webcam

It is surprisingly easy to write a small Python script that takes a webcam (or any other video source) and detects when something is moving. It uses the OpenCV library.

1. Difference between frames

Compares two frames and displays only what has changed. The rest is black.

import cv2

# Select camera. Usually 0, or 1, and so on
cam = cv2.VideoCapture(0)

try:
	while cam.isOpened():
		ret, frame1 = cam.read()
		ret, frame2 = cam.read()
		# Difference between the two frames
		diff = cv2.absdiff(frame1, frame2)

		# To exit press 'q'
		if cv2.waitKey(10) == ord('q'):
			break

		# Display
		cv2.imshow('Difference', diff)
except Exception as e:
	print("Error:", e)
finally:
	cam.release()
	cv2.destroyAllWindows()

2. Binary image

Turn it into binary: only black and white. To make it easy to find contours.

import cv2

# Select camera. Usually 0, or 1, and so on
cam = cv2.VideoCapture(0)

try:
	while cam.isOpened():
		ret, frame1 = cam.read()
		ret, frame2 = cam.read()
		# Compare frames
		diff = cv2.absdiff(frame1, frame2)
		# Convert diff to a grayscale image (frames are BGR)
		gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
		# Blur the gray image
		blur = cv2.GaussianBlur(gray, (5, 5), 0)
		# Convert to a binary image: only black and white
		_, thresh = cv2.threshold(blur, 20, 255, cv2.THRESH_BINARY)
		# Expand the moving image parts
		dilated = cv2.dilate(thresh, None, iterations=3)

		# To exit press 'q'
		if cv2.waitKey(10) == ord('q'):
			break

		# Display
		cv2.imshow('Difference', dilated)
except Exception as e:
	print("Error:", e)
finally:
	cam.release()
	cv2.destroyAllWindows()

3. Contours

Now the found contours are displayed over the original image.

import cv2

# Select camera. Usually 0, or 1, and so on
cam = cv2.VideoCapture(0)

try:
	while cam.isOpened():
		ret, frame1 = cam.read()
		ret, frame2 = cam.read()
		# Compare frames
		diff = cv2.absdiff(frame1, frame2)
		# Convert diff to a grayscale image (frames are BGR)
		gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
		# Blur the gray image
		blur = cv2.GaussianBlur(gray, (5, 5), 0)
		# Convert to a binary image: only black and white
		_, thresh = cv2.threshold(blur, 20, 255, cv2.THRESH_BINARY)
		# Expand the moving image parts
		dilated = cv2.dilate(thresh, None, iterations=3)
		# Find the contours of the moving parts
		contours, _ = cv2.findContours(dilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
		# Draw contours
		cv2.drawContours(frame1, contours, -1, (0, 255, 0), 2)

		# To exit press 'q'
		if cv2.waitKey(10) == ord('q'):
			break

		# Display
		cv2.imshow('Difference', frame1)
except Exception as e:
	print("Error:", e)
finally:
	cam.release()
	cv2.destroyAllWindows()

4. Rectangle

Once we know where the contours are, i.e. their starting coordinates on the x and y axes, we can draw rectangles around these regions.

import cv2

# Select camera. Usually 0, or 1, and so on
cam = cv2.VideoCapture(0)

try:
	while cam.isOpened():
		ret, frame1 = cam.read()
		ret, frame2 = cam.read()
		# Compare frames
		diff = cv2.absdiff(frame1, frame2)
		# Convert diff to a grayscale image (frames are BGR)
		gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
		# Blur the gray image
		blur = cv2.GaussianBlur(gray, (5, 5), 0)
		# Convert to a binary image: only black and white
		_, thresh = cv2.threshold(blur, 20, 255, cv2.THRESH_BINARY)
		# Expand the moving image parts
		dilated = cv2.dilate(thresh, None, iterations=3)
		# Find the contours of the moving parts
		contours, _ = cv2.findContours(dilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

		for c in contours:
			# Select the minimum movement area.
			# Smaller contours are ignored.
			if cv2.contourArea(c) < 2000:
				continue
			# Contour position and size
			x, y, w, h = cv2.boundingRect(c)
			# Draw rectangle
			cv2.rectangle(frame1, (x, y), (x+w, y+h), (0, 255, 0), 2)
			# Do something with the detected region

		# To exit press 'q'
		if cv2.waitKey(10) == ord('q'):
			break

		# Display
		cv2.imshow('Movement', frame1)
except Exception as e:
	print("Error:", e)
finally:
	cam.release()
	cv2.destroyAllWindows()

Demo video

Custom Wooden Mechanical Keyboard

I had an old, cheap rubber dome keyboard with missing keys. So I took it apart and found its controller PCB. The board pads were coated with graphite; I removed it, soldered wires onto the pads, and put the board on a breadboard. Then I mapped all the row and column combinations to find out which scancode each one outputs. For this I wrote a Python script that displays which key is pressed.
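The mapping step above can be sketched as a simple data structure: each (row, column) pair that closes a circuit corresponds to one key. This is a hypothetical illustration in plain Python, not my actual script, and the example keys are invented.

```python
# Keyboard matrix: (row, column) -> key name.
# Filled in by shorting each row/column pad pair on the controller
# and recording which key the computer reports.
matrix = {}

def record(row, col, key):
    """Remember which key a row/column combination produces."""
    matrix[(row, col)] = key

# Example observations from probing the pads (invented values):
record(0, 0, "Esc")
record(0, 1, "1")
record(1, 0, "Tab")
record(1, 1, "Q")

def key_for(row, col):
    """Look up a combination; unprobed pairs stay unmapped."""
    return matrix.get((row, col), "unmapped")

print(key_for(1, 1))  # Q
print(key_for(5, 5))  # unmapped
```

Once the full table is known, you can pick which keys you want and wire only those row/column crossings.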


Then I designed a PCB for one switch. It has a breadboard-friendly layout and uses through-hole components. I also made a plywood mounting plate, stained it black and lacquered it. I connected the buttons through the mounting plate to the circuit boards.


Then I figured out which buttons I wanted and soldered the right column (C) and row (R) wires in the right places.


I also made a plywood case.


The next part was the keycaps. I could have used plastic ones, but I wanted them to be symmetrical and to have symbols on them. So again I made them from plywood: the top layer is solid oak, the rest is birch plywood, laser-cut and glued together. One part is 3D printed: the cross-shaped piece that connects the keycaps to the switches. The hardest part was sanding the keycaps to the right shape. I did it by hand, but it should be a mechanized process. My keycaps are also larger than normal keyboard keycaps.


All wooden parts are finished with Liberon Black Bison Antikvax.

Links:

How I made my digital radio

This is my simple one-button radio: one button to turn it on and change the volume. There are actually two more buttons: the first selects a new channel and the second saves it to memory.

It was a project that taught me how to draw PCBs, what Gerber files are, and so on. I designed 4 different layouts and ordered two. The first one had some noise problems, so I added some filters and ordered the second one.

It works, and I am mostly happy with it. I do not have any education in electronics, so forgive me if it does not meet professional standards.

This is now my second stop-motion animation. It took about a week to make. It could be better if I put more time into it.

Schematics

It uses an Arduino Nano as the main control unit.

Code

Old (Ancient Egyptian) solid wood items

Ancient Egyptian wooden furniture. I love this kind of old wooden item: aged and natural. I especially chose pieces that have not been overpainted or decorated.

Link: https://collections.louvre.fr/en/ark:/53355/cl010017034

Link: https://collections.louvre.fr/en/ark:/53355/cl010018558

Link: https://collections.louvre.fr/en/ark:/53355/cl010024942

Link: https://collections.louvre.fr/en/ark:/53355/cl010007843

Link: https://collections.louvre.fr/en/ark:/53355/cl010006806

Link: https://collections.louvre.fr/en/ark:/53355/cl010010764

Link: https://collections.louvre.fr/en/ark:/53355/cl010010823

Link: https://collections.louvre.fr/en/ark:/53355/cl010008109

Link: https://collections.louvre.fr/en/ark:/53355/cl010036860

Link: https://collections.louvre.fr/en/ark:/53355/cl010008763

Link: https://collections.louvre.fr/en/ark:/53355/cl010008450

Link: https://collections.louvre.fr/en/ark:/53355/cl010008519

Link: https://collections.louvre.fr/en/ark:/53355/cl010007198

Link: https://collections.louvre.fr/en/ark:/53355/cl010037301

Link: https://collections.louvre.fr/en/ark:/53355/cl010006589

Link: https://collections.louvre.fr/en/ark:/53355/cl010007030

Link: https://collections.louvre.fr/en/ark:/53355/cl010007473

Link: https://collections.louvre.fr/en/ark:/53355/cl010029163

Link: https://collections.louvre.fr/en/ark:/53355/cl010034013

Link: https://collections.louvre.fr/en/ark:/53355/cl010011663

Link: https://collections.louvre.fr/en/ark:/53355/cl010011263

Link: https://collections.louvre.fr/en/ark:/53355/cl010006782

The most common colour in an image

There are cases where we are not interested in the colour of individual pixels, but instead want an overall, aggregated average. In agriculture, for example, colour can be used to estimate the ripeness of a fruit or vegetable.

First, load the required libraries:

import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
import PIL
from skimage import io
%matplotlib inline

# All the images are in this directory
kataloog = 'https://raw.githubusercontent.com/taunoe/jupyter-notebooks/main/Pildi-anal%C3%BC%C3%BCs/images/'

Define a function that helps display two images side by side.

def show_img_compar(pilt_1, pilt_2 ):
    f, ax = plt.subplots(1, 2, figsize=(10,10))
    ax[0].imshow(pilt_1)
    ax[1].imshow(pilt_2)
    ax[0].axis('on')  # Show the axes
    ax[1].axis('off') # Hide the axes
    f.tight_layout()
    plt.show()

Choose the images:

#pilt_1 = cv.imread(kataloog + 'tamm.jpg') # gives an error
pilt_1 = io.imread(kataloog + 'tamm.jpg')
#pilt_1 = cv.cvtColor(pilt_1, cv.COLOR_BGR2RGB) # reorders the BGR channels to RGB
pilt_2 = io.imread(kataloog + 'sinie.jpg')
#pilt_2 = cv.cvtColor(pilt_2, cv.COLOR_BGR2RGB)

Make the images smaller:

dim = (500, 300)
# Resize the images
pilt_1 = cv.resize(pilt_1, dim, interpolation = cv.INTER_AREA)
pilt_2 = cv.resize(pilt_2, dim, interpolation = cv.INTER_AREA)

Check that displaying the images works:

Method 1 – average pixel value

The simplest method is to find the average pixel values, using numpy's average function.

This method can give inaccurate results, especially when the image contains large contrast differences (light and dark areas). For the oak texture, though, the result is quite believable.

img_temp = pilt_1.copy()
img_temp[:,:,0], img_temp[:,:,1], img_temp[:,:,2] = np.average(pilt_1, axis=(0,1))
show_img_compar(pilt_1, img_temp)
img_temp = pilt_2.copy()
img_temp[:,:,0], img_temp[:,:,1], img_temp[:,:,2] = np.average(pilt_2, axis=(0,1))
show_img_compar(pilt_2, img_temp)
pilt_3 = io.imread(kataloog + 'muster.jpg') # import the image
pilt_3 = cv.resize(pilt_3, dim, interpolation = cv.INTER_AREA) # resize
img_temp = pilt_3.copy() # make a copy
img_temp[:,:,0], img_temp[:,:,1], img_temp[:,:,2] = np.average(pilt_3, axis=(0,1)) # compute the average
show_img_compar(pilt_3, img_temp) # display the results

Method 2 – pixels with the most common colour

The second method is a little more accurate than the first: we count how often each pixel value occurs.

img_temp = pilt_3.copy()
unique, counts = np.unique(img_temp.reshape(-1, 3), axis=0, return_counts=True)
img_temp[:,:,0], img_temp[:,:,1], img_temp[:,:,2] = unique[np.argmax(counts)]
show_img_compar(pilt_3, img_temp)

For this image the grey background is the most common colour, so the result is not what we expected.

img_temp_2 = pilt_2.copy()
unique, counts = np.unique(img_temp_2.reshape(-1, 3), axis=0, return_counts=True)
img_temp_2[:,:,0], img_temp_2[:,:,1], img_temp_2[:,:,2] = unique[np.argmax(counts)]
show_img_compar(pilt_2, img_temp_2)

Method 3 – most common colour groups in the image

K-means clustering: we divide the pixels into clusters by colour similarity, then look at the average colour of each cluster.

from sklearn.cluster import KMeans

clt = KMeans(n_clusters=5) # Number of clusters

A function for building the colour palette.

def palette(clusters):
    width=300
    height=50
    palette = np.zeros((height, width, 3), np.uint8)
    steps = width/clusters.cluster_centers_.shape[0]
    for idx, centers in enumerate(clusters.cluster_centers_): 
        palette[:, int(idx*steps):(int((idx+1)*steps)), :] = centers
    return palette
clt_1 = clt.fit(pilt_3.reshape(-1, 3))
show_img_compar(pilt_3, palette(clt_1))
clt_2 = clt.fit(pilt_2.reshape(-1, 3))
show_img_compar(pilt_2, palette(clt_2))

Method 4 – most common colour groups, shown proportionally

Essentially the same as the previous method, but the found colours are displayed in proportion to how common they are: if a colour is more common, its rectangle is larger, and vice versa.

A helper function for displaying the colour palette:

from collections import Counter

def palette_perc(k_cluster):
    width = 300
    height = 50
    palette = np.zeros((height, width, 3), np.uint8)
    
    n_pixels = len(k_cluster.labels_)
    counter = Counter(k_cluster.labels_) # count how many pixels per cluster
    perc = {}
    for i in counter:
        perc[i] = np.round(counter[i]/n_pixels, 2)
    perc = dict(sorted(perc.items()))
    
    #for logging purposes
    #print(perc)
    #print(k_cluster.cluster_centers_)
    
    step = 0
    
    for idx, centers in enumerate(k_cluster.cluster_centers_): 
        palette[:, step:int(step + perc[idx]*width+1), :] = centers
        step += int(perc[idx]*width+1)
        
    return palette
clt_1 = clt.fit(pilt_3.reshape(-1, 3))
show_img_compar(pilt_3, palette_perc(clt_1))
clt_2 = clt.fit(pilt_2.reshape(-1, 3))
show_img_compar(pilt_2, palette_perc(clt_2))
pilt_4 = io.imread(kataloog + 'klaster1.jpg') # import the image
pilt_4 = cv.resize(pilt_4, dim, interpolation = cv.INTER_AREA) # resize
clt_4 = clt.fit(pilt_4.reshape(-1, 3))
show_img_compar(pilt_4, palette_perc(clt_4))
pilt_5 = io.imread(kataloog + 'wermo1.png') # import the image
#pilt_5 = cv.resize(pilt_5, dim, interpolation = cv.INTER_AREA) # resize
clt_5 = clt.fit(pilt_5.reshape(-1, 3))
show_img_compar(pilt_5, palette_perc(clt_5))
pilt_7 = io.imread(kataloog + 'kevad.jpg') # import the image
#pilt_7 = cv.resize(pilt_7, (500, 500), interpolation = cv.INTER_AREA) # resize
clt_7 = clt.fit(pilt_7.reshape(-1, 3))
show_img_compar(pilt_7, palette_perc(clt_7))

Links: