Wednesday, January 1, 2014

ASP.NET web site for generating test data

Have you ever wanted an ASP.NET site (and its source code) with functionality like generatedata.com (which generates test data)? If so, just check my newest creation at CodePlex. The license is MIT, so you can use it as you wish. Have fun generating test data! Best wishes, Agnius

Sunday, September 22, 2013

Building logic gates from domino

Feeling bored? Try building a computer out of dominoes :-). It's at least theoretically possible, though practically unfeasible. What is practically feasible is building some primitive logic circuits from dominoes. It is challenging and fun, because a domino system is highly unstable, so it is hard to construct 100% deterministic logic functions from it. We will try to build an AND logic gate here. According to Wikipedia, it's easy to build OR and XOR gates like this:


So an AND gate can be composed from these two gate types, as the wiki states, by the formula:
A AND B = (A OR B) XOR (A XOR B)
But here we have two problems:
  • We need many dominoes, because according to the formula we need 2 XOR gates and 1 OR gate. Besides, the XOR gate is relatively big.
  • Also, as the wiki states, the XOR gate is extremely dependent on timing. And because we have 2 XOR gates here, the dependence on timing increases too.
This means that an AND gate constructed by that scheme is highly unstable and/or needs many domino pieces. Of course, we could try to design a new XOR gate from the ground up, but we would still need 3 gates, and that is outside the scope of this article (maybe we will try to construct an XOR gate in some future article). So what is left? The answer: we will try to construct an AND gate from the ground up, with better stability and fewer domino pieces needed. After some thinking and a trial-and-error process I came to this AND gate design:
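By the way, the boolean identity behind the composed gate is easy to sanity-check exhaustively in a few lines of Python:

```python
# Verify the identity A AND B = (A OR B) XOR (A XOR B)
# over all four boolean input combinations.
for A in (False, True):
    for B in (False, True):
        composed = (A or B) ^ (A ^ B)
        assert composed == (A and B), (A, B)
print("identity holds for all inputs")
```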

Well, my scheme has its own instability, which depends on the distance between dominoes. But in my opinion, distance can be adjusted more easily than the timing between two events without any external devices ;-) So this is how my domino AND logic gate performs in a live environment:
video

And here is an example schema of how a NOT gate could be built:
video

Have fun with domino computing! :-)

Wednesday, September 26, 2012

DNA sequence visualization

While reading this very interesting article about the history of the human genome, I stumbled upon the fact that a portion of our DNA is 3 billion years old!! That's a pretty damn big number of years for a DNA sequence to survive through trillions of our ancestors. That got my attention, and I wanted to see what this 3-billion-year-old sequence looks like in visualized form. But after a short Google search I didn't find a simple DNA sequence visualizer. So I decided to code a simple DNA sequence visualizer in the browser with the help of JavaScript. So here it is:
(By default this 3-billion-year-old DNA sequence shared among all living creatures is set, but you can paste your own sequence as well - hit the button to see it.)


Each nucleobase in a different color
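For the curious, the same idea can be sketched offline in a few lines of Python (the palette and grid layout here are my assumptions, not the exact rendering used by the in-browser tool): map each nucleobase to a color and write the sequence out as a square ASCII PPM image.

```python
import math

# Hypothetical palette: one RGB color per nucleobase (an assumption,
# not necessarily the colors used by the canvas visualizer above).
PALETTE = {'A': (255, 0, 0), 'C': (0, 200, 0),
           'G': (0, 0, 255), 'T': (255, 200, 0)}

def sequence_to_ppm(seq, path):
    """Render a DNA sequence as a square grid of colored pixels (ASCII PPM)."""
    side = int(math.ceil(math.sqrt(len(seq))))
    with open(path, 'w') as f:
        f.write("P3\n%d %d\n255\n" % (side, side))
        for i in range(side * side):
            # cells past the end of the sequence are padded with white
            r, g, b = PALETTE.get(seq[i], (0, 0, 0)) if i < len(seq) else (255, 255, 255)
            f.write("%d %d %d\n" % (r, g, b))

# sample fragment only - paste any sequence here
sequence_to_ppm("GTGCCAGCAGCCGCGGTAA", "dna.ppm")
```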

So better check whether your friend has this 3-billion-year-old sequence - otherwise you may be talking with a Cylon =)

Thursday, October 6, 2011

iPhone game "Pong Hau K'i" source code

Have you ever dreamed of writing an iPhone board game and wondered where to start? Or maybe you wanted some simple iPhone game source code to look at and learn from? Now you have a good opportunity to do just that. I've decided to publish the source code & assets of my iPhone board game Pong Hau K'i. Use them for any purpose you wish - be it personal, educational or commercial use ...
HTH !

Friday, May 27, 2011

Ellipse detection in image by using Hough transform

How can we detect ellipses in images? One way is to use the Hough transform. I will use a Hough transform variant created by Yonghong Xie and Qiang Ji. The algorithm's pseudocode:

1. Store all edge pixels in a one dimensional array.
2. Clear the accumulator array.
3. For each pixel (x1, y1), carry out the following steps from (4) to (14).
   4. For each other pixel (x2, y2), if the distance between
   (x1, y1) and (x2, y2) is greater than the required least
   distance for a pair of pixels to be considered then
   carry out the following steps from (5) to (14).
      5. From the pair of pixels (x1, y1) and (x2, y2) calculate the center (x0, y0),
      orientation and length of the major axis of the assumed ellipse.
      6. For each third pixel (x, y), if the distance between
      (x, y) and (x0, y0) is greater than the required least
      distance for a pair of pixels to be considered then
      carry out the following steps from (7) to (9).
         7. Calculate the length of minor axis.
         8. Increment the accumulator for this length of minor axis by 1.
         9. Loop until all pixels are computed for this pair of pixels.
      10. Find the maximum element in accumulator array. The
      related length is the possible length of minor axis
      for assumed ellipse. If the vote is greater than the
      required least number for assumed ellipse, one
      ellipse is detected.
      11. Output ellipse parameters.
      12. Remove the pixels on the detected ellipse from edge pixel array.
      13. Clear accumulator array.
      14. Loop until all pairs of pixels are computed.
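Steps 5-7 rest on the law of cosines: for a third pixel at distance d from the center and distance f from one major-axis endpoint, cos τ = (a² + d² − f²) / (2ad), and the semi-minor axis follows from b² = a²d²(1 − cos²τ) / (a² − d²cos²τ). A quick Python sanity check of that geometry on a known ellipse (a = 5, b = 3):

```python
from math import sqrt, cos, sin

def half_minor(a, center, end, p):
    """Recover the semi-minor axis from one boundary point (steps 5-7)."""
    d2 = (p[0] - center[0])**2 + (p[1] - center[1])**2  # squared distance to center
    f2 = (p[0] - end[0])**2 + (p[1] - end[1])**2        # squared distance to endpoint
    cost = (a*a + d2 - f2) / (2.0 * a * sqrt(d2))       # law of cosines
    return sqrt(a*a * d2 * (1.0 - cost*cost) / (a*a - d2*cost*cost))

# Axis-aligned ellipse x = 5 cos t, y = 3 sin t: semi-axes a = 5, b = 3.
t = 0.927  # arbitrary parameter value
p = (5 * cos(t), 3 * sin(t))
b = half_minor(5.0, (0.0, 0.0), (5.0, 0.0), p)
assert abs(b - 3.0) < 1e-6
print("recovered semi-minor axis:", b)
```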

A proof-of-concept implementation of the algorithm in Python:
import sys
from PIL import Image,ImageFilter, ImageDraw
from math import *

# some global constants
EL_COVERAGE_RATIO = 0.9
EL_VERIFICATION_DISTANCE = 1.
EL_PATH_POINTS = 51
MIN_MINOR_FREQUENCY = 30
MIN_HALF_MAJOR = 8
MIN_HALF_MINOR = 6

def distance(p1,p2):
 x1,y1 = p1
 x2,y2 = p2
 return sqrt((x1-x2)**2 + (y1-y2)**2)

def nonnegative(v):
 return v if v >= 0 else 0

def parametricEllipse(center, a, b, angle):
 xc,yc = center
 elx = lambda t: xc + a * cos(t) * cos(angle) - b * sin(t) * sin(angle)
 ely = lambda t: yc + a * cos(t) * sin(angle) + b * sin(t) * cos(angle)
 return [(int(elx(2.*pi*x/float(EL_PATH_POINTS-1))),int(ely(2.*pi*x/float(EL_PATH_POINTS-1)))) for x in range(EL_PATH_POINTS)]

assert len(sys.argv)==3, "missing input and/or output file!"

im = Image.open(sys.argv[1])
width, height = im.size
io = Image.new('RGB',(width, height),(255,255,255))
draw = ImageDraw.Draw(io)

# converting image to grayscale
im = im.convert('L')

# detecting edge pixels
im = im.filter(ImageFilter.FIND_EDGES)

# converting to binary image
im = im.convert('1')
pixels = im.load()
pxy = []

# extracting binary pixels coordinates
for x in range(width):
 for y in range(height):
  if pixels[x,y]==255:
   pxy.append((x,y))

# applying Hough transform for ellipses detection.
# algorithm is taken from this paper:
# http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.8792&rep=rep1&type=pdf
cIx = -1
colors = [(255,0,0),(0,200,0),(0,0,255)]
for x1,y1 in pxy:
  for x2,y2 in pxy:
    bbins = {}
    dist = distance((x1,y1),(x2,y2))
    if dist >= 2*MIN_HALF_MAJOR:
     cent = ((x1+x2)/2.,(y1+y2)/2.)
     a = dist/2. # semi-length of major axis
     alfa = atan2((y2 - y1),(x2 - x1))
     for rx,ry in pxy:
      d = distance((rx,ry),cent)
      if d >= MIN_HALF_MINOR:
       f = distance((rx,ry),(x2,y2))
       cost = (a**2. + d**2. - f**2.)/(0.00001+2.*a*d)
       b = sqrt(nonnegative((a**2. * d**2. * (1.-cost**2.))/(0.00001 + a**2. - d**2. * cost**2.)))  # semi-length of minor axis
       b = int(b)
        if b in bbins:
         bbins[b]+=1
        elif b > 0:
         bbins[b]=1
      bbins_rev = dict([(v,k) for k,v in bbins.items()])
      if not bbins_rev:
       continue
      max_freq = max(bbins_rev.keys())
      bmax = bbins_rev[max_freq]
      # Did we find a probable ellipse?
     if max_freq >= MIN_MINOR_FREQUENCY and alfa >=0.0 and bmax >= MIN_HALF_MINOR:
      elData = parametricEllipse(cent, a, bmax, alfa)
      supported = []
      supportRatio = 0.0
       # counting how many pixels lie on the ellipse path
      for i in range(EL_PATH_POINTS):
       elx,ely = elData[i]
       added = False
       for x,y in pxy:
        if distance((elx,ely),(x,y)) <= EL_VERIFICATION_DISTANCE:
         supported.append((x,y))
         if not added:
          supportRatio += 1./float(EL_PATH_POINTS)
          added = True
      supported = list(set(supported))
      # if number of pixels on ellipse path is big enough
      if supportRatio >= EL_COVERAGE_RATIO:
       cIx = (cIx+1)%3
        print("coverage %.2f" % supportRatio, "frequency", max_freq, "center", cent, "angle %.2f" % alfa, "axes (%.2f,%.2f)" % (a, bmax))
        # removing found ellipse pixels from further analysis
       for p in supported:
        pxy.remove(p)
        # drawing the found ellipse
       for i in range(EL_PATH_POINTS):
        elx,ely = elData[i]
        if i < EL_PATH_POINTS-1:
         draw.line(elData[i] + elData[i+1], fill=colors[cIx])
io.save(sys.argv[2])
print("***************************************************************")
print("************************** DONE *******************************")
print("***************************************************************")

(The prototype algorithm is slow; it was tested only on 50x50 images.) So, by running this algorithm on this image:
we will get this algorithm output:

Have fun with computer vision!

Friday, May 6, 2011

Gradient transfer function

Suppose we need to draw a linear gradient, but in a way that lets us control the color distribution between gradient parts. How do we do that? The answer: a gradient transfer function.

The algorithm is this:
1. Extract the pixel's relative distance [0..1] from the start of the gradient.
2. Update this distance by feeding it to the gradient transfer function.
3. Blend the source color with the target color using the updated distance as the blend ratio.
4. Set the calculated color on the pixel.

We will use this gradient transfer function:

f(x) = (1 - a)·x + a / (1 + e^(b - 2·b·x))

where x is the pixel's relative distance from the start and a, b are adjustable parameters.
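To get a feel for the function's shape, here is a small Python sketch mirroring the NonLinearTransfer function from the JavaScript below: a = 0 degenerates to the identity (a plain linear gradient), while larger a and b concentrate the color change around the midpoint x = 0.5.

```python
from math import exp

def transfer(x, a, b):
    # f(x) = (1 - a)*x + a / (1 + e^(b - 2*b*x))
    return (1 - a) * x + a / (1 + exp(b - 2 * b * x))

# a = 0 gives the identity, i.e. an ordinary linear gradient
assert abs(transfer(0.3, 0.0, 5.0) - 0.3) < 1e-12

# with a = 0.5, b = 5 the curve hugs the endpoints and is steep in the middle
print(transfer(0.0, 0.5, 5.0))  # close to 0
print(transfer(0.5, 0.5, 5.0))  # exactly 0.5 by symmetry
print(transfer(1.0, 0.5, 5.0))  # close to 1
```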

Below is a JavaScript implementation of this method (your browser must support the HTML5 canvas element). You can change the a, b parameters of the transfer function and see what happens to the gradient.



                 
(Interactive demo: sliders adjust the parameters a (default 0.50) and b (default 5.00); the transfer function plot and the resulting gradient are drawn on canvas elements.)





And here is the JavaScript code which does that (the plot is generated with the FLOT library):

function showValue(newValue,el)
{
  document.getElementById(el).innerHTML=parseFloat(newValue).toFixed(2);
  GeneratePlot();
}

function Clamp(x,a,b) {
  return Math.min(Math.max(x, a), b);
};

function NonLinearTransfer(x,a,b) {
  return (1-a)*x + a*Math.pow(1+Math.exp(b-2*b*x),-1);
};

function GeneratePlot() {
    var data = [];
    var a =  document.getElementById("rngNonlinearity").value;
    var b =  document.getElementById("rngAbruptness").value;

    for (var i = 0; i <= 1; i += 0.01)
        data.push([i, Clamp(NonLinearTransfer(i,a,b),0,1)]);
    
    $.plot($("#placeholder"),
          [{ data: data, label: "Transfer function"}],
          { 
                xaxes: [ { min: 0, max: 1 }],
                yaxes: [ { min: 0, max: 1 }],
  legend: { position: 'nw' }
           }
          );
    GenerateGrad();
};

function Blend(k,x,y) {
  return (1-k)*x + k*y;
}

function setPixel(imageData, x, y, r, g, b, a) {
    var index = (x + y * imageData.width) * 4;
    imageData.data[index+0] = r;
    imageData.data[index+1] = g;
    imageData.data[index+2] = b;
    imageData.data[index+3] = a;
}

function GenerateGrad() {

var element = document.getElementById("canvasGrad");
var c = element.getContext("2d");

var width = parseInt(element.getAttribute("width"));
var height = parseInt(element.getAttribute("height"));

var imageData = c.createImageData(width, height);

var scolor = [0,255,0];
var tcolor = [0,0,255];
var c1 = document.getElementById("rngNonlinearity").value;
var c2 = document.getElementById("rngAbruptness").value;

// draw gradient
for (var x = 0; x < width; x++) {
  var k = x/width;
  k = NonLinearTransfer(k,c1,c2);
  var r = Blend(k,scolor[0],tcolor[0]);
  var g = Blend(k,scolor[1],tcolor[1]);
  var b = Blend(k,scolor[2],tcolor[2]);
  for (var y = 0; y < height; y++) {
    setPixel(imageData, x, y, r, g, b, 0xff);
  }
}

c.putImageData(imageData, 0, 0);
}

Have fun!

Monday, January 24, 2011

Algorithm to determine image contrast

How do we determine image contrast? More specifically: how do we determine that image contrast is low and the image needs automatic contrast adjustment through the histogram equalization method?

The algorithm is this:
1. Calculate the cumulative histogram of the image.
2. Fit a linear regression to the cumulative histogram in the form freq(x) = A*x + B.
3. Calculate the RMSE of real_cumulative_frequency(x) - freq(x).
4. If that RMSE is close to zero, the image is already equalized and should have good contrast. (This means that for equalized images the cumulative histogram must be linear.)
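The four steps above can be sketched compactly in Python (stdlib only; the 0.01 RMSE threshold mirrors the C code further down and is an empirical choice):

```python
from math import sqrt

def contrast_is_ok(pixels, threshold=0.01):
    """pixels: flat list of gray levels in 0..255."""
    # 1. cumulative histogram, normalized to [0, 1]
    hist = [0.0] * 256
    for p in pixels:
        hist[p] += 1.0 / len(pixels)
    for i in range(255):
        hist[i + 1] += hist[i]
    # 2. least-squares line fit freq(x) = A*x + B over the 256 bins
    n = 256
    xs = range(n)
    mx = sum(xs) / float(n)
    my = sum(hist) / float(n)
    A = sum((x - mx) * (y - my) for x, y in zip(xs, hist)) / sum((x - mx) ** 2 for x in xs)
    B = my - A * mx
    # 3. RMSE between the real cumulative histogram and the fitted line
    rmse = sqrt(sum((hist[x] - (A * x + B)) ** 2 for x in xs) / n)
    # 4. near-zero RMSE means the histogram is already (almost) equalized
    return rmse <= threshold

# an image spread uniformly over all 256 levels has a perfectly linear
# cumulative histogram, so it passes the check
print(contrast_is_ok([i % 256 for i in range(256 * 256)]))  # True
```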

Because a picture is worth a thousand words, this is how cumulative histograms look for images with different levels of contrast:

So as you can see, the image with the most aggressive contrast has a cumulative histogram which is almost a perfect line.

Now the fun part - C code which opens an image in PGM format with ASCII encoding and detects whether the image needs contrast adjustment or not:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>

#define MAX_COLS 1000
#define MAX_ROWS 1000

typedef struct {
 int Width;
 int Height;
 int max_value;
 int data[MAX_ROWS][MAX_COLS];
} PGMdata;

void readPgmFile(char* fileName, PGMdata * pgmOut) {
 FILE * pFile;
 char line[1000];
 char* res;
 int lineNum = 0;

 if ((pFile = fopen(fileName , "r")) == NULL)
  printf("Error opening file %s \n", fileName);
 else {
  while ((res = fgets(line, sizeof(line), pFile)) != NULL) {
  lineNum++;
  // max value of pixel
  if (lineNum==4)
   sscanf(line,"%i",&pgmOut->max_value);
  // width and height of image
  if (lineNum==3) {
   sscanf(line,"%i %i",&pgmOut->Width, &pgmOut->Height);
  }
  // load real pixels
  if (lineNum > 4) {
   int ix = lineNum - 5;
   int row = ix/pgmOut->Width;
   int col = ix - row*pgmOut->Width;
   sscanf(line,"%i", &pgmOut->data[row][col]);
  }
  }
  fclose(pFile);
    }
}

void CumulativeHistogram(PGMdata* image, double* hist) {
 int i,j;
 double dp = 1.0/((double) image->Width*image->Height);

 // initializing histogram bins to zero
 for (i=0; i<256; i++) {
  hist[i] = 0.0;
 }

 // calculating histogram
 for (i=0; i<image->Width; i++) {
  for (j=0; j<image->Height; j++) {
   hist[image->data[j][i]] += dp;
  }
 }

 // making histogram cumulative
 for (i=0; i<255; i++) {
  hist[i+1] += hist[i];
 }
}

void CreateXmatrix(double mat[256][2]) {
 int i;
 for (i=0; i<256; i++) {
   mat[i][0] = 1;
   mat[i][1] = i;
 }
}

void TransposeMatrix(double mat[256][2], double tmat[2][256]) {
 int i,j;
 for (i=0; i<2; i++) {
  for (j=0; j<256; j++) {
   tmat[i][j] = mat[j][i];
  }
 }
}

double MultiplyMatrixes(double A[2][256], double B[256][2], int row, int col) {
 int i;
 double sum = 0.0;

 for (i=0; i<256; i++) {
  sum += A[row][i]*B[i][col];
 }

 return sum;
}

double MultiplyMatrixAndVector(double A[2][256], double Y[256], int row) {
 int i;
 double sum = 0.0;

 for (i=0; i<256; i++) {
  sum += A[row][i]*Y[i];
 }

 return sum;
}

double HistogramPredicted(double c0, double c1, double level) {
 return c0 + c1*level;
}

double RootMeanSquare(double* hist, double c0, double c1) {
 double rms = 0.0;
 int i;

 for (i=0; i<256; i++) {
  rms += pow(hist[i]-HistogramPredicted(c0,c1,i),2.0);
 }
 rms /= 256.0;

 return sqrt(rms);
}

void LinearLeastSquares(double* hist, double* c0, double* c1) {
 double X[256][2];
 double tX[2][256];
 double a1,a2,a3,a4;
 double b1, b2;

 // create matrix X composed of x'es
 CreateXmatrix(X);

 // transpose X matrix
 TransposeMatrix(X,tX);

 // calculate tX*X matrix which is
 // [a1 a2]
 // [a3 a4]

 a1 = MultiplyMatrixes(tX,X,0,0);
 a2 = MultiplyMatrixes(tX,X,0,1);
 a3 = MultiplyMatrixes(tX,X,1,0);
 a4 = MultiplyMatrixes(tX,X,1,1);

 // calculate tX*Y  (Y=HISTOGRAM) which is
 // [b1]
 // [b2]

 b1 = MultiplyMatrixAndVector(tX,hist,0);
 b2 = MultiplyMatrixAndVector(tX,hist,1);

 // solve matrix equation by using elimination of variables method
 // [a1 a2]   [c0]   [b1]
 //         *      =
 // [a3 a4]   [c1]   [b2]

 *c1 = (a1*b2-a3*b1)/(a1*a4-a2*a3);
 *c0 = (b1-a2*(*c1))/a1;

}

int main () {
 double hist[256];
 static PGMdata image = {0};
 double c0=0.0, c1=0.0;
 double rms;
 char pgm_file[20] = {0};

 while (1) {
   // get PGM file name
   printf("PGM filename: ");
    scanf("%19s", pgm_file);

   // read grayscale image from PGM format
   memset(&image, 0, sizeof(image));
   readPgmFile(pgm_file, &image);
   if (image.Width == 0)
     continue;

   // create cumulative histogram
   CumulativeHistogram(&image, hist);

    // least squares method, to find c0,c1 coefficients in the equation:
   // c0 + c1*gray_level = Frequency
   LinearLeastSquares(hist,&c0,&c1);

   // calculate RMS of Frequency[bin]-predicted_Frequency(bin)
   rms = RootMeanSquare(hist,c0,c1);

   // Low RMS shows that histogram frequencies are distributed in linear fashion
   // between bins. This means that histogram equalization was performed on image
   // and/or that global image contrast is good.
   if (rms <= 0.01)
    printf("\n==> %s contrast is OK (rms=%f)\n\n", pgm_file ,rms);
   else
    printf("\n==> %s contrast is BAD (rms=%f)\n\n", pgm_file ,rms);
 }

 return 0;
}
If you want to test this algorithm on an ASCII PGM image, you had better convert the image to PGM format with GIMP 2.6.11, because the code was only tested on files produced that way.
So, by running this C code on the image below (after converting it to PGM)

we get the algorithm output:
"image contrast is BAD ... " :-)