Backpropagation is one of several ways in which an artificial neural network (ANN) can be trained. It is a supervised training scheme, meaning it learns from labeled training data. In simple terms, BackProp is like "learning from mistakes": the supervisor corrects the ANN whenever it makes a mistake. Initially, all the edge weights are randomly assigned. For every input in the training dataset, the ANN is activated and its output is observed. This output…
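The "learn from mistakes" loop above can be sketched in a few lines of Python. This is only an illustrative toy (a single sigmoid neuron trained on OR, so backpropagation collapses to the delta rule), not code from the post:

```python
import math, random

# Sketch of the supervised "learn from mistakes" loop: random initial
# weights, activate the network per input, observe the output, and nudge
# the weights in proportion to the error.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR function
random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # random initial weights
b = random.uniform(-0.5, 0.5)
lr = 0.5

for epoch in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0]*x1 + w[1]*x2 + b)   # activate the network
        err = target - out                     # the "mistake"
        grad = err * out * (1 - out)           # error scaled by the slope
        w[0] += lr * grad * x1                 # supervisor corrects weights
        w[1] += lr * grad * x2
        b += lr * grad

predictions = [round(sigmoid(w[0]*x1 + w[1]*x2 + b)) for (x1, x2), _ in data]
print(predictions)  # [0, 1, 1, 1] after training
```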

In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space, except for the zero vector, which is assigned a length of zero. A seminorm, on the other hand, is allowed to assign zero length to some non-zero vectors (in addition to the zero vector). The L1-norm loss function is known as least absolute deviations (LAD). $latex…
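As a quick illustration (a Python sketch, not part of the original post), the L1 norm is just a sum of absolute values, and the LAD loss is the L1 norm of the residuals:

```python
# L1 norm of a vector, and the least absolute deviations (LAD) loss
# between targets and predictions: both are sums of absolute values.

def l1_norm(v):
    return sum(abs(x) for x in v)

def lad_loss(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred))

print(l1_norm([3, -4, 0]))             # |3| + |-4| + |0| = 7
print(lad_loss([1, 2, 3], [1, 0, 5]))  # |0| + |2| + |-2| = 4
```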

An array $latex a $ is called beautiful if for every pair of numbers $latex a_i $, $latex a_j $ ($latex i \neq j $), there exists an $latex a_k $ such that $latex a_k = a_i * a_j $. Note that $latex k $ can be equal to $latex i $ or $latex j $ too. Input: the first line of the input contains an integer T denoting the number of test cases. T test cases…
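The property can be checked directly from the definition. Below is a hedged, brute-force O(n²) checker in Python, intended only to make the definition concrete, not as a solution to the judge's problem:

```python
def is_beautiful(a):
    """For every pair (a_i, a_j) with i != j, the product a_i * a_j
    must itself appear somewhere in the array (k may equal i or j)."""
    values = set(a)
    n = len(a)
    return all(a[i] * a[j] in values
               for i in range(n) for j in range(n) if i != j)

print(is_beautiful([1, 1]))     # True: 1*1 = 1 is in the array
print(is_beautiful([0, 5, 7]))  # False: 5*7 = 35 is missing
```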

The 3020 CNC router is a popular Chinese machine for a good reason. For around 500 EUR, you get a simple CNC with a rigid all-aluminum frame, three decent stepper motors, and pretty good resolution. However, a major disadvantage of this CNC is its outdated electronics: the motor controller uses a parallel port, which is no longer available on any new PC. For that reason, it is recommended to upgrade the motor controller to…

In a previous post we saw the differences between K-means and K-NN. Here is a step-by-step description of the K-nearest neighbors (KNN) algorithm:

1. Determine the parameter K, the number of nearest neighbors.
2. Calculate the distance between the query instance and all the training samples.
3. Sort the distances and determine the nearest neighbors based on the K-th minimum distance.
4. Gather the categories of the nearest neighbors.
5. Use a simple majority of the categories of the nearest neighbors as the prediction…
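Those steps map almost one-to-one onto code. A brute-force Python sketch (toy data, not the post's implementation):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # Step 2: distance from the query instance to every training sample.
    dists = [(math.dist(x, query), label) for x, label in train]
    # Step 3: sort and keep the K nearest.
    dists.sort(key=lambda d: d[0])
    # Step 4: gather the categories of those neighbours.
    labels = [label for _, label in dists[:k]]
    # Step 5: simple majority vote.
    return Counter(labels).most_common(1)[0][0]

train = [((1, 1), 'A'), ((1, 2), 'A'), ((8, 8), 'B'), ((9, 8), 'B')]
print(knn_predict(train, (2, 2), k=3))  # 'A'
```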

In short, the two algorithms pursue different goals. K-nearest neighbors belongs to the supervised learning family of classification (or regression) algorithms: it takes a set of labeled points and uses them to learn how to label other points. It is supervised because you classify a point based on the known classification of other points. In contrast, K-means belongs to the unsupervised learning family of clustering algorithms (it takes a bunch of unlabeled…
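To make the contrast concrete, here is a minimal K-means sketch in Python, clustering unlabeled 1-D points by alternating assignment and centroid updates (toy data and parameters of my choosing):

```python
import random

# K-means needs no labels: it alternates (1) assigning each point to its
# nearest centroid and (2) moving each centroid to the mean of its cluster.
def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)  # start from k random data points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]  # update step
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

print(kmeans([1.0, 1.2, 0.8, 10.0, 10.5, 9.5], k=2))  # centroids near 1 and 10
```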

1. What is ICA? Independent Component Analysis is a technique for separating signals from their linear mixtures. Assume two observed signals $ x_1(t) $ and $ x_2(t) $ that are linear combinations of two source signals $ s_1(t) $ and $ s_2(t) $; the relationship between $ x_n $ and $ s_n $ is shown in the following system of linear equations, where $ a_{11} $, $ a_{12} $, $ a_{21} $ and…
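The mixing model can be written out directly. A small Python illustration (arbitrary example values for the mixing coefficients, not taken from the post):

```python
# The two-signal ICA mixing model:
#   x1(t) = a11*s1(t) + a12*s2(t)
#   x2(t) = a21*s1(t) + a22*s2(t)
# i.e. x = A s for a 2x2 mixing matrix A. ICA's task is to recover the
# sources s from the observations x without knowing A.

A = [[0.8, 0.3],
     [0.2, 0.7]]
s1 = [0.0, 1.0, -1.0]  # toy source signals
s2 = [1.0, 0.0,  1.0]

x1 = [A[0][0]*u + A[0][1]*v for u, v in zip(s1, s2)]
x2 = [A[1][0]*u + A[1][1]*v for u, v in zip(s1, s2)]
print(x1)  # mixture 1
print(x2)  # mixture 2
```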

The perceptron is one of the simplest forms of a neural network model. The following code snippet is a simple implementation of such a network in Matlab. Before using it, please read some background information on Wikipedia: https://en.wikipedia.org/wiki/Perceptron

clc; clear all; close all;

% Data (first column is a constant bias input)
inputs = [1 0 0; 1 0 1; 1 1 0; 1 1 1];
% Desired output
desiredOutput = [1 1 1 0];
% Learning rate
learningRate = 0.1;
% Hard-limit threshold
threshold = 0.5;
% Initial weights
weights = [0 0 0];
output = zeros(1, size(inputs,1));

for j = 1:200
    fprintf('----- Round %d -----\n', j);
    for i = 1:size(inputs,1)
        c1 = weights(1) * inputs(i,1);
        c2 = weights(2) * inputs(i,2);
        c3 = weights(3) * inputs(i,3);
        csum = c1 + c2 + c3;
        if csum > threshold
            output(i) = 1;
        else
            output(i) = 0;
        end
        error = desiredOutput(i) - output(i);
        correction = learningRate * error;
        weights(1) = weights(1) + inputs(i,1) * correction;
        weights(2) = weights(2) + inputs(i,2) * correction;
        weights(3) = weights(3) + inputs(i,3) * correction;
        fprintf('Input [ %d %d %d ]\t DOut|COut [ %d ]|[ %d ]\t New Weights [ %1.1f %1.1f %1.1f ]\n', ...
            inputs(i,1), inputs(i,2), inputs(i,3), ...
            desiredOutput(i), output(i), ...
            weights(1), weights(2), weights(3));
    end
    if isequal(output, desiredOutput), break; end
end

Separating mixed and overlapped images is a frequent problem in computer vision and image processing. The following Matlab source code demonstrates image separation using the FastICA algorithm based on kurtosis.

clc; clear all; close all;

% Load the two source images
original_image1 = imread('input1.jpg');
original_image2 = imread('input2.jpg');

figure()
subplot(3,3,1), imshow(original_image1), title('Original Source 1');
subplot(3,3,2), imshow(original_image2), title('Original Source 2');

[x1,y1,z1] = size(original_image1);
[x2,y2,z2] = size(original_image2);

% Flatten each image into a row vector
s1 = reshape(original_image1, [1, x1*y1*z1]);
s2 = reshape(original_image2, [1, x2*y2*z2]);

% Mix the sources with a random 2x2 mixing matrix (entries in [0.4, 0.99])
mixtures = ((0.99-0.4).*rand(2,2) + 0.4) * double([s1; s2]);
mixture_1 = uint8(round(reshape(mixtures(1,:), [x1,y1,z1])));
mixture_2 = uint8(round(reshape(mixtures(2,:), [x2,y2,z2])));
subplot(3,3,4), imshow(mixture_1), title('Mixture 1');
subplot(3,3,5), imshow(mixture_2), title('Mixture 2');

mixtures_origin = mixtures;

% Center the mixtures (zero mean per row)
mixtures_mean = zeros(2,1);
for i = 1:2
    mixtures_mean(i) = mean(mixtures(i,:));
end
for i = 1:2
    mixtures(i,:) = mixtures(i,:) - mixtures_mean(i);
end

% Whiten via the eigendecomposition of the covariance matrix
mixtures_cov = cov(mixtures');
[E,D] = eig(mixtures_cov);
Q = inv(sqrt(D)) * E';
mixtures_white = Q * mixtures;
IsI = cov(mixtures_white');   % should be (close to) the identity

[VariableNum, SampleNum] = size(mixtures_white);
numofIC = VariableNum;
B = zeros(numofIC, VariableNum);

for r = 1:numofIC
    i = 1;
    maxIterationsNum = 250;
    % Random unit-norm starting vector
    b = 2*(rand(numofIC,1) - .5);
    b = b / norm(b);
    while i <= maxIterationsNum
        if i == maxIterationsNum
            fprintf('No convergence - %d - %d\n', r, maxIterationsNum);
            break;
        end
        b_prev = b;
        t = mixtures_white' * b;
        G  = 4 * t.^3;    % contrast derivative g(u) = 4u^3 (kurtosis)
        Gp = 12 * t.^2;   % g'(u)
        % FastICA fixed-point update: b <- E{x g(b'x)} - E{g'(b'x)} b
        b = (mixtures_white * G) / SampleNum - mean(Gp) * b;
        % Deflation: decorrelate from the components already found
        b = b - B*B'*b;
        b = b / norm(b);
        if abs(abs(b'*b_prev) - 1) < 1e-10
            B(:,r) = b;
            break;
        end
        fprintf('%d | %d - %d\n', r, i, abs(abs(b'*b_prev)-1));
        i = i + 1;
    end
end

% Unmix the original mixtures and rescale for display
ICAedS = abs(55 * (B' * Q * mixtures_origin));
original_image1_icaed = uint8(round(reshape(ICAedS(1,:), [x1,y1,z1])));
original_image2_icaed = uint8(round(reshape(ICAedS(2,:), [x2,y2,z2])));
subplot(3,3,7), imshow(original_image1_icaed), title('Restored Image 1');
subplot(3,3,8), imshow(original_image2_icaed), title('Restored Image 2');

If a program needs to measure elapsed time, you need a timer that remains independent even if the user changes the system clock. In Linux there are several clock implementations for different cases (https://linux.die.net/man/3/clock_gettime):

CLOCK_REALTIME: system-wide realtime clock. Setting this clock requires appropriate privileges.
CLOCK_MONOTONIC: clock that cannot be set and represents monotonic time since some unspecified starting point.
CLOCK_PROCESS_CPUTIME_ID: high-resolution per-process timer from the CPU.
CLOCK_THREAD_CPUTIME_ID: thread-specific CPU-time clock.…
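The same idea is available in higher-level languages. As a quick illustration (Python rather than the C API the post discusses), `time.monotonic()` exposes a clock in the spirit of CLOCK_MONOTONIC, so elapsed-time measurements survive the user resetting the wall clock:

```python
import time

# Measure elapsed time with a monotonic clock: unlike time.time(), it
# cannot jump backwards when the system (wall-clock) time is changed.
start = time.monotonic()
time.sleep(0.05)                     # the work being timed
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.3f} s")   # at least 0.05 s
```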

The aim of this article is to detect the edges with a given direction in an image. To that end, create a function [ E ] = oriented_edges( I, thr, a, da ) that takes as input a double grayscale image I, a threshold value thr, a direction a, and an angle da. The output of the function is a binary image E where the pixels that meet the following requirements should have the value 1:…

PCA is a way of identifying patterns in data and expressing the data in such a way as to highlight their similarities and differences. Below are the steps of the algorithm:

close all;
clear all;
clc;

data = load('Data/eeg.mat');
data = data.data{1,2};

Step 1 – Initialize the dataset: 6 vectors of 32 samples each

%% Step 1 - Initialize the dataset, 6 vectors of 32 samples
X = data(1:32, 1:6);

Step 2 – Subtract the mean from each of the data dimensions. The mean subtracted is the average across each dimension: [math]Y = X - (O * Mean(X))[/math], where [math]O…
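The mean-subtraction step can be sketched as follows (a Python illustration on a tiny made-up 4x2 matrix rather than the 32x6 EEG data above):

```python
# Step 2 in code: Y = X - O * mean(X), where O is a column of ones that
# broadcasts the per-dimension mean to every sample. After this step,
# every column (dimension) of Y has zero mean.

X = [[2.0, 0.0],
     [0.0, 2.0],
     [3.0, 1.0],
     [1.0, 1.0]]
n = len(X)
means = [sum(row[j] for row in X) / n for j in range(len(X[0]))]
Y = [[row[j] - means[j] for j in range(len(row))] for row in X]
print(means)  # [1.5, 1.0]
print(Y)      # each column of Y now sums to zero
```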