tag:blogger.com,1999:blog-74085695748051886132024-03-13T14:37:10.686-07:00Agnius Vasiliauskas coding sandboxUnknownnoreply@blogger.comBlogger61125tag:blogger.com,1999:blog-7408569574805188613.post-50331827369507113052014-11-26T07:16:00.002-08:002017-10-01T04:50:12.716-07:00Tracking movement path with magnetometer in AndroidThis time I wanted to learn Android development, so I tried to choose an interesting idea for a test Android project and settled on this mission - drawing the user's movement path using the magnetometer as a compass. You can use this app as an aid for orientation in an unfamiliar place, or, for example, to chart the layout of an apartment in some building. I've used this app to draw my own flat's layout :-) You can download the sources of this application from the <a href="https://code.google.com/p/coding-experiments/source/browse/#svn%2Ftrunk%2FPathRecorder">Google project hosting</a> page of my project. I've called this app PathRecorder :-) Structure of the code:<br>
<span style="color: yellow;">
<br><li>FullscreenActivity.java</li><br><br>
<span style="color: gray;">
The activity's basic responsibilities are: creating and setting up the view used for drawing; setting up UI interaction (an OnTouch listener, etc.); setting up the magnetometer sensor; initializing the various classes the project uses (such as Storage for app data, MenuManager for showing the user menu, etc.); and starting some threads that monitor program state.
</span>
<br><br><br><li>MenuItem.java</li><br><br>
<span style="color: gray;">
Stores information about the currently active menu item in the UI, such as the menu text and the program state to be fired when the user switches to this menu item. (All communication between application components goes through the ProgramState object; it works like a state machine :-) )
</span>
<br><br><br><li>MenuManager.java</li><br><br>
<span style="color: gray;">
Container for MenuItem instances. Also holds the logic for controlling the menu system, such as switching the currently active menu item, loading menu bitmap resources, checking touch events, and performing menu actions (changing the active menu item, executing the current item, etc.).
</span>
<br><br><br><li>Movement.java</li><br><br>
<span style="color: gray;">
Holds the MovementStatistics class, which computes information about the current movement path, such as the time traveled in each compass direction, how many times the user rotated clockwise or counter-clockwise, etc.
The Movement class itself stores a LinkedList of user moves, appends nodes to this list as the user travels in some direction, queries the magnetometer sensor for the current heading relative to North, and converts the movement data into a representation suitable for drawing the path in a view (it returns a Path object for the canvas).
</span>
<br><br><br><li>Orientation.java</li><br><br>
<span style="color: gray;">
Implements the SensorEventListener interface to receive data from the magnetometer sensor. Performs all interaction with the magnetometer, such as calibration (detecting the magnetometer's accuracy and the min/max values of the magnetic field, and checking for calibration errors; for example, the device should be tilted as little as possible during calibration, because tilting changes the magnetometer output rapidly), calculating the heading to the North pole from the magnetic field strength data, and passing this information to the view for compass drawing.
</span>
<br><br><br><li>ProgramState.java</li><br><br>
<span style="color: gray;">
The application's state machine - it integrates all application components so that they can interact with each other and with the user interface. Some threads also monitor the program state and change it by calling the "check..." methods in the ProgramState class (for example, the calibration monitor thread informs ProgramState when calibration is finished and the UI can show the menu to the user).
</span>
<br><br><br><li>SketchView.java</li><br><br>
<span style="color: gray;">
Performs drawing of the visual elements in the view, such as the compass, status bar, menu, user movement statistics, and the user's movement path. Also exports the drawing to a bitmap.
</span>
<br><br><br><li>Storage.java</li><br><br>
<span style="color: gray;">
Exports the user's path drawing to a JPG file. Loads/saves the magnetometer calibration data.
</span>
<br><br><br><li>Vector2D.java</li><br><br>
<span style="color: gray;">
Wrapper for vector algebra (for example, dot product of vectors, angle between vectors, vector normalization, addition, rotation, etc.).
</span>
</span>
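To give a feel for the last item, here is a minimal sketch of the kind of helpers such a 2D vector class typically provides (my own reconstruction, not the project's Vector2D.java code):

```java
// Minimal 2D vector sketch: dot product, length, normalization,
// addition, angle between vectors, and rotation.
public final class Vec2 {
    final double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }

    double dot(Vec2 o)     { return x * o.x + y * o.y; }
    double length()        { return Math.sqrt(dot(this)); }
    Vec2 normalized()      { double l = length(); return new Vec2(x / l, y / l); }
    Vec2 plus(Vec2 o)      { return new Vec2(x + o.x, y + o.y); }
    // angle between two vectors, in radians
    double angleTo(Vec2 o) { return Math.acos(dot(o) / (length() * o.length())); }
    // counter-clockwise rotation by the given angle (radians)
    Vec2 rotated(double a) {
        return new Vec2(x * Math.cos(a) - y * Math.sin(a),
                        x * Math.sin(a) + y * Math.cos(a));
    }

    public static void main(String[] args) {
        Vec2 east = new Vec2(1, 0), north = new Vec2(0, 1);
        System.out.println(Math.toDegrees(east.angleTo(north))); // prints 90.0
    }
}
```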
<br>
The most important classes are Orientation.java and Movement.java.
<br>--------------------------------------------<br>
Orientation.java<br>
--------------------------------------------<br>
Encapsulates control of the magnetometer sensor. Now I would like to talk a bit about how the direction to North is calculated. The key fact is that the magnetic field strength along the X axis reaches its absolute maximum when pointing west, and along the Y axis when pointing north. In other words, the magnetometer's X axis measures the field strength along the west-east axis, and its Y axis along the north-south axis. With these facts we can calculate the direction to North. Consider this picture:<br>
<a href="http://3.bp.blogspot.com/-WAt6q3wrz0w/VHXlXGWvYTI/AAAAAAAAAYY/ZGnpkGhixe0/s1600/directionsCompass.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-WAt6q3wrz0w/VHXlXGWvYTI/AAAAAAAAAYY/ZGnpkGhixe0/s400/directionsCompass.png" /></a><br>So by having magnetometer data we can calculate (a,b) angles in a following way:<br>
<a href="http://3.bp.blogspot.com/-4MhSa8vUgSg/VHXl29Cd3VI/AAAAAAAAAYk/-EE1t62SKHs/s1600/b_degrees.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-4MhSa8vUgSg/VHXl29Cd3VI/AAAAAAAAAYk/-EE1t62SKHs/s400/b_degrees.png" /></a><br>
<a href="http://3.bp.blogspot.com/-3zqyEJ70cqY/VHXl2745dtI/AAAAAAAAAYg/P56lHaEwfmo/s1600/a_degrees.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-3zqyEJ70cqY/VHXl2745dtI/AAAAAAAAAYg/P56lHaEwfmo/s400/a_degrees.png" /></a><br>
The only problem is that the calculated angle a lies in the range 0..180 degrees, while we need the full range of -180..+180 degrees. So we need to determine the sign of angle a, which can be derived from angle b. Doing that gives us the full
direction-to-North formula:<br>
<a href="http://4.bp.blogspot.com/-G084h96oV7c/VHXw6irPsjI/AAAAAAAAAZU/baFEqOz5H-M/s1600/a_degrees_full.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-G084h96oV7c/VHXw6irPsjI/AAAAAAAAAZU/baFEqOz5H-M/s640/a_degrees_full.png" /></a>
<br>
So we just need to find the min/max values of the magnetic field. This is achieved by a two-phase magnetometer calibration. Phase 1 simply measures the fluctuation of the magnetic field while the device is stationary and extracts the measurement error of the field magnitude. Phase 2 collects magnetic field strength data while the user turns around in place (i.e. around the Z axis). After one full rotation, the min/max values are extracted from this data.
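Putting the pieces together, a rough sketch of the heading computation might look like this (a simplified reconstruction under my own assumptions about normalization; not the exact code from Orientation.java):

```java
// Sketch of heading computation from calibrated magnetometer data.
// xMin/xMax/yMin/yMax are assumed to come from the two-phase calibration.
public final class HeadingSketch {
    // Scale a raw reading into the range [-1, 1] using calibration limits.
    static double normalize(double v, double min, double max) {
        return 2.0 * (v - min) / (max - min) - 1.0;
    }

    // Returns the angle to North in degrees, in the range [-180, 180].
    static double headingDegrees(double rawX, double rawY,
                                 double xMin, double xMax,
                                 double yMin, double yMax) {
        double nx = normalize(rawX, xMin, xMax); // west-east component
        double ny = normalize(rawY, yMin, yMax); // north-south component
        // acos(ny) gives a in 0..180; the sign comes from the X component,
        // which is what the full formula in the post resolves via angle b.
        double a = Math.toDegrees(Math.acos(Math.max(-1.0, Math.min(1.0, ny))));
        return (nx >= 0) ? a : -a;
    }

    public static void main(String[] args) {
        // Device facing North: Y at its calibrated maximum, X mid-range.
        System.out.println(headingDegrees(0.0, 50.0, -50, 50, -50, 50)); // prints 0.0
    }
}
```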
<br>
-----------------------------------------<br>
Movement.java<br>
-----------------------------------------<br>
This class collects user movement information in the form of how long the user traveled in each direction relative to North, fetching magnetometer data through the Orientation class. Later, the view that draws the movement path queries drawing data from Movement by calling the method "getMovementDataForDraw()", which looks like this: <br>
<pre style="color: green">
public Object[] getMovementDataForDraw(float[] startPoint) {
if (moves == null || moves.size() < 2)
return null;
if (this.isMovementStatisticsNeeded)
this.movementStats.calculateMovementStatistics(moves);
LinkedList<Float[]> points = convertMovesToPoints(startPoint);
float[] limits = getMinMaxCoordsOfPoints(points);
centerPathToScreen(points, startPoint, limits);
LinkedList<Float[]> pointsFiltered = filterOutLinesWithSmallAngleChange(points);
Path path = convertPointsToPath(pointsFiltered);
Object[] ret = new Object[3];
ret[0] = path;
ret[1] = pointsFiltered.getLast();
ret[2] = this.movementStats;
return ret;
}
</pre>
I think this piece of code is self-descriptive, so I won't comment much on it :-)<br>
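For the curious, here is a hedged sketch of what a point filter like filterOutLinesWithSmallAngleChange might do (my reconstruction, not the project's code): drop intermediate points where the path direction barely changes, so the drawn Path has fewer segments.

```java
import java.util.LinkedList;

// Keep only points where the walking direction actually turns by more
// than minAngleDeg degrees; nearly collinear points are dropped.
public final class AngleFilterSketch {
    static LinkedList<float[]> filter(LinkedList<float[]> pts, double minAngleDeg) {
        LinkedList<float[]> out = new LinkedList<>();
        if (pts.size() < 3) { out.addAll(pts); return out; }
        out.add(pts.getFirst());
        for (int i = 1; i < pts.size() - 1; i++) {
            float[] a = out.getLast(), b = pts.get(i), c = pts.get(i + 1);
            double a1 = Math.atan2(b[1] - a[1], b[0] - a[0]);
            double a2 = Math.atan2(c[1] - b[1], c[0] - b[0]);
            double diff = Math.toDegrees(Math.abs(a1 - a2));
            if (diff > minAngleDeg) out.add(b); // keep only real turns
        }
        out.add(pts.getLast());
        return out;
    }

    public static void main(String[] args) {
        LinkedList<float[]> pts = new LinkedList<>();
        pts.add(new float[]{0, 0});
        pts.add(new float[]{1, 0}); // collinear with neighbors -> dropped
        pts.add(new float[]{2, 0}); // 90-degree turn happens here -> kept
        pts.add(new float[]{2, 1});
        System.out.println(filter(pts, 5).size()); // prints 3
    }
}
```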
And finally, a picture showing what the movement path looks like when the application runs on an Android-enabled device. Using this app I walked along the walls of my flat, and in the end we get my flat's layout, which looks like this:<br>
<a href="http://3.bp.blogspot.com/-zK77bl_kmZY/VHXsuwQMQoI/AAAAAAAAAZI/5MmQScrASjQ/s1600/path_1416766588.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-zK77bl_kmZY/VHXsuwQMQoI/AAAAAAAAAZI/5MmQScrASjQ/s400/path_1416766588.png" /></a><br>
Some explanation: starting from the lower-right corner and going clockwise in the picture, the rooms are: kitchen, bedroom, WC, bathroom, sitting room.<br>
<br>Have fun developing for Android and using sensors !<br>Unknownnoreply@blogger.com16tag:blogger.com,1999:blog-7408569574805188613.post-49645398103817887762014-11-03T10:31:00.000-08:002014-11-03T10:42:02.590-08:00User identification by his/her keyboard typing patternI would like to show my next masterpiece :-) It would be about biometrics - How to identify a user by it's keyboard typing pattern. Basic approach is not very complex. At first you need to record typing duration of symbols in phrase which user types on keyboard. There are two kinds of typing duration - one is time between key press and key release events, and second - time between key presses of two adjacent symbols in typing phrase. So we accumulate these durations into some list or array and we will some sort of signal. We also need to modify this signal a bit, because it contains noise - some random fluctuation of user's typing pattern. So before we could use this duration info - there is intermediate step - convert typing durations signal into frequency domain using <a href="http://en.wikipedia.org/wiki/Fast_Fourier_transform">FFT transformation</a>. After FFT tranformation we filter-out signal components with high frequency and convert back filtered signal from frequency domain into time domain by FFT transformation. Once we have typing durations with basic typing pattern with small amount of noise - we save this signal in file and later compare it with user's current typing duration pattern. So user identification algorithm is this:
<span style="color: yellow;">
<ol>
<li>Record current typing durations of symbols and durations between symbols in a typing phrase</li>
<li>Convert these durations signal into frequency domain using FFT</li>
<li>Filter-out high frequency signal components</li>
<li>Convert back signal from frequency domain into time domain</li>
<li>Compare this filtered signal with the one recorded from user previously and stored in file.</li>
</ol>
</span>
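Steps (2)-(4) above can be illustrated with a toy low-pass filter (my own sketch; the real application uses the FFT from FftBase.java, while here a naive O(n&sup2;) DFT keeps the example self-contained):

```java
// Toy low-pass filter: transform the duration signal to the frequency
// domain, zero out the high-frequency bins, and transform back.
public final class LowPassSketch {
    // The keepBins lowest frequencies are kept (plus their mirrored bins).
    static double[] lowPass(double[] signal, int keepBins) {
        int n = signal.length;
        double[] re = new double[n], im = new double[n];
        for (int k = 0; k < n; k++)            // forward DFT
            for (int t = 0; t < n; t++) {
                double ang = -2 * Math.PI * k * t / n;
                re[k] += signal[t] * Math.cos(ang);
                im[k] += signal[t] * Math.sin(ang);
            }
        for (int k = 0; k < n; k++)            // zero high-frequency bins
            if (k > keepBins && k < n - keepBins) { re[k] = 0; im[k] = 0; }
        double[] out = new double[n];
        for (int t = 0; t < n; t++) {          // inverse DFT (real part)
            for (int k = 0; k < n; k++) {
                double ang = 2 * Math.PI * k * t / n;
                out[t] += re[k] * Math.cos(ang) - im[k] * Math.sin(ang);
            }
            out[t] /= n;
        }
        return out;
    }

    public static void main(String[] args) {
        // A constant signal has no high-frequency content, so it survives.
        double[] f = lowPass(new double[]{1, 1, 1, 1}, 0);
        System.out.println(f[0]); // prints 1.0
    }
}
```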
Some notes about step (5):<br>
There are multiple ways to compare two signals. I chose a simple approach - count how many slopes in the two curves have the same direction, i.e. both increase or both decrease. The number of same-direction slopes is divided by the total number of slopes, giving a curve-matching factor. When it is bigger than some tolerance level, we can say that the user who typed the phrase is the same one who typed the previously recorded phrase. I have prepared a proof-of-concept Java application which records/detects/shows the user's typing pattern. You can <a href="https://drive.google.com/file/d/0BwrFeS7BioZia2JsT1psWkJYWnc/view?usp=sharing">download it</a> and, as always, use it as you wish. A bit of explanation of how the Java code is structured:<br>
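The slope-direction comparison can be sketched as follows (a hypothetical reconstruction; the names are mine, not the application's):

```java
import java.util.stream.IntStream;

// Compare two typing-duration curves by counting slopes that move in
// the same direction, as described in the note about step (5).
public final class SlopeMatch {
    // Fraction of adjacent-sample slopes that agree in direction.
    static double matchingFactor(double[] recorded, double[] current) {
        int n = Math.min(recorded.length, current.length) - 1; // number of slopes
        long same = IntStream.range(0, n)
                .filter(i -> Math.signum(recorded[i + 1] - recorded[i])
                          == Math.signum(current[i + 1] - current[i]))
                .count();
        return (double) same / n;
    }

    public static void main(String[] args) {
        double[] a = {0.1, 0.3, 0.2, 0.5, 0.4};
        double[] b = {0.2, 0.4, 0.1, 0.6, 0.3};
        // All four slopes agree in direction here.
        System.out.println(matchingFactor(a, b)); // prints 1.0
    }
}
```

A matching factor above some tolerance level (say 0.8) would then count as "same user".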
<span style="color: yellow;">
<li>FftBase.java</li>
<li>Helper.java</li>
<li>SimpleBiometricApplication.java</li>
</span>
<br>FftBase.java<br><br>has functions for performing the direct and inverse FFT operations. I downloaded this module from <a href="http://www.wikijava.org/wiki/The_Fast_Fourier_Transform_in_Java_%28part_1%29">this developer</a>.
<br><br>Helper.java<br><br>groups a bunch of functions used together with the FFT operation, such as "filterOutHighFrequencies" (removes high-frequency noise from the signal in the frequency domain), "normalizeFftOutput" (scales the Y axis of the FFT output into the 0..1 range), "extractTypingPattern" (converts the typing pattern into the frequency domain, removes noise, converts it back into the time domain, scales the output to the 0..1 range, and returns the data to the caller), "loadTypingPattern" (loads a recorded typing pattern from a CSV file into a double array), and "generateCsvFileForTypingDurations" (saves a typing pattern into a CSV file).
<br><br>SimpleBiometricApplication.java<br><br>A Swing application which uses the above modules to record/detect the user's typing pattern. Typing patterns are also plotted on a graph using the <a href="http://www.jfree.org/jfreechart/">JFreeChart</a> Java library. This was my first experience with Swing. At first I thought of using JavaFX, which is also cool and more configurable than Swing, but at the time I couldn't find a GUI builder for FX, and because Swing is well known and widely used in the Java industry, I decided to learn Swing a bit. It was a nice discovery that you can set custom code in the Swing GUI builder which modifies some GUI component property. I only miss the feature of this custom-code editor accepting an anonymous function for setting a property; currently it only accepts arguments that are accepted by the component's "set..." method. Probably the problem is that when Swing was written, Java had no anonymous functions - they came later. And custom code for setting a property can be lengthy. It works well when the component's set... method accepts some class: then you can write an anonymous class in the editor and pass it in. But even this doesn't always help, because the class accepted by the set method may implement an interface from which you CAN'T cast back to the accepted class. For example, I needed to modify the JFrame bounds property, which accepts a Rectangle. Rectangle implements the Shape interface, so I thought I would pass a custom anonymous class made from Shape into the setBounds method. But I couldn't, because a Shape can't be converted to a Rectangle, even though Rectangle itself implements Shape. A comma operator in Java would also help in this case, but we don't have one (at least for now). Otherwise the GUI builder is very helpful and has a lot of components.<br><br>
And finally, here is how my Swing form looks when a typing pattern is detected for the user:<br>
<a href="http://1.bp.blogspot.com/-tDvbXwA0Lp4/VFfIWxNegYI/AAAAAAAAAYA/ROTa-PIQUfQ/s1600/typingBiometrics.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-tDvbXwA0Lp4/VFfIWxNegYI/AAAAAAAAAYA/ROTa-PIQUfQ/s400/typingBiometrics.png" /></a><br>
If the user is the same one who typed the originally recorded message, you will see this message box:<br>
<a href="http://2.bp.blogspot.com/-eVhVu1d0yKU/VFfIxXhSZpI/AAAAAAAAAYI/hgBpwVVMrYc/s1600/userVerification.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-eVhVu1d0yKU/VFfIxXhSZpI/AAAAAAAAAYI/hgBpwVVMrYc/s400/userVerification.png" /></a><br>
For me the most interesting coding parts were related to the FFT and signal comparison. I hope you'll like them too !<br>
Have fun in signal processing !Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-7408569574805188613.post-60419681317966968172014-10-18T10:07:00.001-07:002014-10-20T02:35:38.700-07:00Reading between the lines<p style="background-color: DarkSlateGray">befoRe anna paVlOvNa And the otHers HaD timE tO smile their appreCiAtion of tHe ViCoMte's EpigRaM, Pierre aGaIn BrokE iNtO the CoNvErsation, AnD thoUgh Anna Pavlovna fElT sUre he wouLd Say sOmEthiNg InapPropRiate, she Was uNable tO sTop hIm.
"ThE execution of thE DUc d'ENgHien," deClarEd monsIeur PierRe, "WaS a PoliTiCal nEcessity, And iT sEems to me thAt napoleon ShOwEd GreaTnEss of sOul by not FeArInG to take oN himSeLf thE wHoLe respOnsibiLity Of thAt deed."
"DiEu! mon DIeu!" mUtTeRed ANnA PavLoVna in a teRrified whiSpEr.
"What, MoNsieUr piErRe... Do you consider that assassination shows greatness of soul?</p>
Can you guess what this text represents ? :-) It's an extract from the book "War and Peace", but not only that. This piece of text also holds a secret message. When the text is decoded, it gives this secret message:<br>
<p style="background-color: DarkSlateGray">the quick brown fox jumps over the lazy dog.</p>
So this time I would like to add a new meaning to the phrase "reading between the lines" in the computing world. Actually, it's better to say "reading between the chars" in this case :-) The talk will be about text steganography - how to hide a secret message, aka the payload, in an original text fragment. The idea is to modulate letter casing so that the casing of the letters in the original text itself carries additional information, which can be used, for example, to encode a secret message in the text. First you need to choose some fixed alphabet of characters to be used in the secret message. It's better to have a small alphabet, because a bigger set requires a longer original text to hold the whole secret message. I've used these chars for the secret message - 'qwertyuiopasdfghjklzxcvbnm. |'. That's 30 characters, so you need at least 30 different secret states to represent these characters in text. I've chosen letter case to represent this secret state: upper case means bit 1, lower case means bit 0. With such a scheme you can encode 1 bit into the letter casing of 1 character (2 total states - uppercase, lowercase). The next step is to calculate how many letters of original text you need to encode 1 secret letter with this scheme. We multiply the number of states of each letter: if one letter holds 2 states, 2 letters hold 2*2=4 states, and so on. So 5 letters hold 2*2*2*2*2 = 32 states, a 5-bit integer of additional information, which is enough for our 30-letter alphabet. This integer represents the secret letter's index in our alphabet. So for 1 secret char you need 5 letters of original text. The general encoding algorithm idea is this (the decoding procedure is very similar, just with some steps in reverse order):<br>
<span style="color: yellow;">
<ol>
<li>Get next secret char from the secret message</li>
<li>Get secret char index in your alphabet</li>
<li>Decompose this index into 5-bit binary pattern</li>
<li>Encode this 5-bit pattern into 5 letters from original text using uppercase mode as bit 1, lowercase mode - as bit 0</li>
<li>Repeat everything from point 1 until all chars of secret message are encoded</li>
</ol>
</span>
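The encoding steps above can be sketched like this (a simplified, self-contained illustration of steps (2)-(4); the real modulateText below also validates input and walks through the cover text skipping non-letters):

```java
// Map one secret char to a 5-bit pattern and apply it as letter casing.
// Alphabet as given in the post; names here are mine.
public final class CaseEncodeSketch {
    static final String ALPHABET = "qwertyuiopasdfghjklzxcvbnm. |";

    // Encode one secret char into the casing of 5 cover-text letters.
    static String encodeChar(char secret, String fiveLetters) {
        int ix = ALPHABET.indexOf(secret);        // step 2: index in alphabet
        StringBuilder out = new StringBuilder();
        for (int bit = 4; bit >= 0; bit--) {      // step 3: 5-bit pattern, MSB first
            boolean one = ((ix >> bit) & 1) == 1;
            char c = fiveLetters.charAt(4 - bit);
            out.append(one ? Character.toUpperCase(c)  // step 4: bit 1 -> uppercase
                           : Character.toLowerCase(c));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // 'e' has index 2 in the alphabet -> binary 00010 ->
        // only the fourth cover letter becomes uppercase.
        System.out.println(encodeChar('e', "hello")); // prints helLo
    }
}
```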
I've prepared a Java program for this algorithm; you can <a href="https://drive.google.com/file/d/0BwrFeS7BioZiNHRWWnU1SWlRdGs/view?usp=sharing">download it</a> and use it as you wish - as always, no restrictions apply, use at your own risk :-) . There are just two classes: Test.java is the main program, and TextSteganography.java is the class which encapsulates the encode/decode methods for secret messages. TextSteganography.java holds the two most important methods, "modulateText" and "extractSecretMessage", whose code is this:<br>
<pre style="color: green">
public String modulateText(String text, String hiddenText) {
hiddenText += '|';
char[] inp = text.toCharArray();
char[] hidden = hiddenText.toCharArray();
int letterIx = -1;
CheckCondition(containsOnlyValidChars(hiddenText), String.format("Hidden text can only contain chars from the set '%s'",
validChards.substring(0, validChards.length()-1)));
int letterCount = countLetters(text);
CheckCondition(letterCount >= 10 * hiddenText.length(),
String.format("Must be at least %d letters in text, but given %d for current payload message", 10 * hiddenText.length(), letterCount));
for (int i = 0; i < hidden.length; i++) {
int ix = validChards.indexOf(hidden[i]);
String patternString = "0000" + Integer.toBinaryString(ix);
char[] pattern = patternString.substring(patternString.length() - 5).toCharArray();
boolean[] convertToUppercase = upperCaseIsNeeded(pattern);
for (int j = 0; j < convertToUppercase.length; j++) {
letterIx = getNextLetterIndex(inp, letterIx);
inp[letterIx] = (convertToUppercase[j])? changeLetterCase(inp[letterIx], LetterCase.UPPER_CASE) :
changeLetterCase(inp[letterIx], LetterCase.LOWER_CASE);
letterIx = getNextLetterIndex(inp, letterIx);
}
}
return String.valueOf(inp);
}
public String extractSecretMessage(String text) {
LinkedList<Character> charList = new LinkedList<>();
char[] inp = text.toCharArray();
int letterIx = -1;
int letterCount = 0;
int binaryPattern = 0;
int newBit;
while (true) {
letterIx = getNextLetterIndex(inp, letterIx);
if (letterIx != -1) {
letterCount++;
newBit = (getLetterCase(inp[letterIx]) == LetterCase.UPPER_CASE)? 1 : 0;
binaryPattern = (binaryPattern << 1) + newBit;
if (letterCount == 5) {
char c = validChards.charAt(binaryPattern);
if (c != '|')
charList.add(c);
letterCount = 0;
binaryPattern = 0;
if (c == '|')
break;
}
letterIx = getNextLetterIndex(inp, letterIx);
}
CheckCondition(letterIx != -1, "Text was not encoded with this steganography encoder !");
}
return charListToString(charList);
}
</pre>
Some additional interesting comments. At first I wanted to use assertions in Java, but I found out that assertions are generally not recommended, at least in production code. Besides, they are turned off by default in the Java virtual machine; you need to turn them on with the -ea switch. But I like assertions very much, because they give you a quick way to check conditions without writing ifs.
So I wrote the function "CheckCondition", which is an assert analog but acts in the recommended way - it throws an exception if the condition is not met. Another interesting note: when I profiled the application, I found that secret message extraction was relatively slow because each decoded character was being appended to the output string. And that's natural, because as I understand it, a String is just a wrapper around a char array, and appending a new array element is a slow procedure - you need to resize the array, copy the elements to a new array, etc., which takes time. So instead I append the decoded characters to a LinkedList of chars and later convert this list into a regular String. This approach works faster when you need to build a new String incrementally. That's all.<br>
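A minimal analog of such a CheckCondition helper might look like this (my reconstruction, not the project's exact code):

```java
// Assert-like helper that stays active regardless of the JVM's -ea flag:
// it throws an exception instead of relying on the assert keyword.
public final class Check {
    static void checkCondition(boolean condition, String message) {
        if (!condition)
            throw new IllegalArgumentException(message);
    }

    public static void main(String[] args) {
        checkCondition(2 + 2 == 4, "math is broken"); // passes silently
        try {
            checkCondition(false, "condition not met");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints condition not met
        }
    }
}
```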
Have fun in using text steganography !Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-65951343617337129752014-09-26T05:02:00.000-07:002014-09-26T05:21:14.857-07:00Solving eight queens problem with random walk algorithmRecently I wrote an article about solving the eight queens problem with an <a href="http://coding-experiments.blogspot.com/2014/09/solving-eight-queens-problem-with.html">electrical force analogy</a>. But after thinking about it for some time, I came to the conclusion that the repulsion force from the board edges and the repulsion force between queens make the queens move in an essentially random fashion. So in general this queen behavior should be comparable to a <a href="http://en.wikipedia.org/wiki/Random_walk">random walk algorithm</a>. It seems that, at least in this case, the concept of electric force is not needed - we should follow the <a href="http://en.wikipedia.org/wiki/Occam's_razor">Occam's razor</a> principle and simply use a random walk algorithm, which generates comparable results. The algorithm principle is very simple: one just needs a bunch of initial random values for some parameters (in the queens case a parameter would be the X or Y position of a queen), and then in each subsequent iteration one applies a small random shift to each parameter value until a solution is found. I implemented this random walk concept in the <a href="https://drive.google.com/file/d/0BwrFeS7BioZiWUQ3NG9tR09oUm8/edit?usp=sharing">latest Java eight queens solver</a>, which you can download and use without restrictions. Code structure:<br>
<br />- <b><span style="color: yellow;">JavaEightQueensProblem</span></b>.<br /> The main class, which employs the random walk algorithm to solve the eight queens problem.<br>
<br />- <b><span style="color: yellow;">RandomWalkerParameter</span></b>.<br /> A class which holds the random value to shift in each algorithm iteration. It also holds two ranges: the random value range, which bounds the value itself, and the shift step range, which bounds the shift step. In each iteration a random shift step is added to the current value.<br>
<br />- <b><span style="color: yellow;">RandomWalkerUnit</span></b>.<br /> A class holding a list of random walk parameters and a flag which says whether the unit has reached its terminating value; if it has, subsequent random shifts are no longer applied to its parameters. Basically this class just groups parameters. For example, a queen unit has two parameters, X and Y, and both are randomly mutated until the queen reaches a good position on the board. The terminating condition is only meaningful for the X,Y pair as a whole - i.e., in the queens case a random unit holds the two parameters X and Y in its list.
<br />- <b><span style="color: yellow;">RandomWalker</span></b>.<br /> The class which executes the random walk algorithm. The most important things passed into the RandomWalker constructor are the list of random walker units and an object exposing a method for checking whether a random walk unit has reached its terminal state. That method is then invoked dynamically via Java reflection in each random walk iteration. By the way, you can pass either a class instance OR the class itself as this calculator object parameter. Passing the class itself is convenient when the terminal-condition method is static, since then you don't have to create an instance just to call the method. Once the random walker object is constructed, you just call the method "startRandomWalkSearch()" to begin looking for a solution. There is no special magic in this random search algorithm :-), the basic code is as simple as this:<br>
<pre style="color: green">
public void startRandomWalkSearch() {
Random rnd = new Random();
int iterationsUntilRestart = 0;
initializeRandomWalkUnits();
for (int i = 0; i < iterationsToTry; i++) {
iterationsUntilRestart++;
if (unitsNotInTerminalState == 0) {
iterationsUntilSolution = i + 1;
return;
}
if (iterationsUntilRestart > restartRandomWalkAfterIterations) {
iterationsUntilRestart = 0;
initializeRandomWalkUnits();
continue;
}
for (int iUnit = 0; iUnit < randomWalkUnits.size(); iUnit++) {
if (!randomWalkUnits.get(iUnit).terminalConditionReached) {
for (RandomWalkerParameter randomWalkParameter : randomWalkUnits.get(iUnit).randomWalkerParameters) {
double stepValue;
double stepValueDiff = randomWalkParameter.stepRange[1] - randomWalkParameter.stepRange[0];
// calculate step value
if (randomWalkParameter.onlyIntegralValuesInStep)
stepValue = randomWalkParameter.stepRange[0] + (double)rnd.nextInt((int)(stepValueDiff + 1.0));
else
stepValue = randomWalkParameter.stepRange[0] + stepValueDiff * rnd.nextDouble();
// apply step
randomWalkParameter.value += stepValue;
randomWalkParameter.value = Math.min(randomWalkParameter.value, randomWalkParameter.valueRange[1]);
randomWalkParameter.value = Math.max(randomWalkParameter.value, randomWalkParameter.valueRange[0]);
}
// check terminal condition
Class[] types = new Class[2];
types[0] = List.class;
types[1] = int.class;
try {
boolean isJavaClass = terminalConditionCalculatorObject.getClass().toString().contains("java.lang.Class");
Method method;
if (isJavaClass)
method = ((Class<?>)terminalConditionCalculatorObject).getMethod(terminalConditionCalculatorMethod, types);
else
method = terminalConditionCalculatorObject.getClass().getMethod(terminalConditionCalculatorMethod, types);
randomWalkUnits.get(iUnit).terminalConditionReached =
(boolean)method.invoke(terminalConditionCalculatorObject, randomWalkUnits, iUnit);
if (randomWalkUnits.get(iUnit).terminalConditionReached)
unitsNotInTerminalState--;
}
catch ( NoSuchMethodException |
NullPointerException |
SecurityException |
IllegalAccessException |
InvocationTargetException e) {
throw new RuntimeException(e.toString());
}
}
}
}
}
</pre>
The algorithm's idea is to initialize each unit's parameters to some random values. Then a for loop applies a small random step to each parameter value and checks whether the unit has reached the terminal (solution) state, by calling the terminal-condition calculation method via reflection.<br>
There is also a unit test package which checks that the random walk algorithm works - useful if you want to modify the algorithm, since you can then verify that the random walk still behaves correctly afterwards :-)
There is just one test package:<br>
<br />- <b><span style="color: yellow;">RandomWalkerTest</span></b>.<br /> What it does is create a random walk object which performs a random search for whether some number X can be factorized into A * B. This test package has two tests inside: <b>testRandomWalkSearchNumberWasFactorized</b>, which tests that a number is factorized correctly, and <b>testRandomWalkSearchNumberWasNotFactorized</b>, which expects factorization to fail, because we pass a prime number, which of course can't be factorized into smaller parts.<br>
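The idea behind that factorization test can be illustrated with a tiny self-contained random walk (a toy version of the idea, not the project's RandomWalker code):

```java
import java.util.Random;

// Randomly walk candidate factors (a, b) until a * b == x, or give up
// after a fixed number of iterations (e.g. when x is prime).
public final class FactorWalkSketch {
    static int[] factorize(int x, int iterations) {
        Random rnd = new Random(42);              // fixed seed for reproducibility
        int a = 2 + rnd.nextInt(x - 2), b = 2 + rnd.nextInt(x - 2);
        for (int i = 0; i < iterations; i++) {
            if (a * b == x) return new int[]{a, b};   // terminal condition reached
            // small random shift in [-1, 1], clamped to the valid range [2, x-1]
            a = Math.min(x - 1, Math.max(2, a + rnd.nextInt(3) - 1));
            b = Math.min(x - 1, Math.max(2, b + rnd.nextInt(3) - 1));
        }
        return null; // not factorized
    }

    public static void main(String[] args) {
        int[] f = factorize(15, 1_000_000);
        System.out.println(f == null ? "failed" : String.valueOf(f[0] * f[1]));
    }
}
```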
Have fun in using random walk search algorithm !!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-31390865892793651812014-09-17T03:10:00.001-07:002014-09-17T03:32:07.586-07:00Solving eight queens problem with electrical force analogyIn this problem I wanted to design and test an innovative algorithmic approach for finding a solution to the eight queens puzzle. The algorithm idea is based on an electrical force analogy. The design hypothesis was that if the queens are affected by an electric field which pushes them apart according to some law, then after some iterations the queens should settle into stable positions which compose a solution. The idea was successful. But for this algorithm to work one needs 3 repulsion forces:
- electrical force. (Two queens push each other apart so that the distance between them increases. Imagine the queens as electric particles of the same charge, for example electrons :-) ).<br />
- repulsion from a shared line. (If two queens share the same horizontal, vertical, or diagonal, they push each other so that they must leave that line.)<br />
- repulsion from the board borders. (The borders are imagined as particles of the same charge as the queens, so the borders push the queens toward the center of the board.)<br />
Below are pictures which graphically represent these 3 pushing forces. The forces are pictured as vectors by which the green queen affects the blue queen.
<br /><br /><b>Electrical force:</b><br />
<a href="http://4.bp.blogspot.com/-eL_Sf2oMVg0/VBk6my_AHLI/AAAAAAAAAXQ/VkojIdp_IQc/s1600/forceElectric.png" imageanchor="1"><img border="0" src="http://4.bp.blogspot.com/-eL_Sf2oMVg0/VBk6my_AHLI/AAAAAAAAAXQ/VkojIdp_IQc/s320/forceElectric.png" /></a><br />
<br /><b>Repulsion from same line force:</b><br />
<a href="http://2.bp.blogspot.com/-N2QfEk1ihbQ/VBk7Gtm33kI/AAAAAAAAAXY/6crusqOPTdU/s1600/forceRepulsionFromTheSameLine.png" imageanchor="1"><img border="0" src="http://2.bp.blogspot.com/-N2QfEk1ihbQ/VBk7Gtm33kI/AAAAAAAAAXY/6crusqOPTdU/s320/forceRepulsionFromTheSameLine.png" /></a><br />
There are six possible repulsion directions from the line occupied by two queens. Currently the algorithm calculates all six repulsion directions and takes one of them at random. Not all directions are equal:
the two queens may end up on another shared line in the next iteration. So the algorithm could be improved to skip directions which form another common line between the two queens. But that improvement was not in the scope of this article :-)
<br /><br /><b>Repulsion from board borders:</b><br />
<a href="http://4.bp.blogspot.com/-pAehoWNMNgQ/VBk8bImqObI/AAAAAAAAAXg/BIl7M8IBYNY/s1600/forceBorder.png" imageanchor="1"><img border="0" src="http://4.bp.blogspot.com/-pAehoWNMNgQ/VBk8bImqObI/AAAAAAAAAXg/BIl7M8IBYNY/s320/forceBorder.png" /></a><br />
The borders are imagined to push the queens towards the center of the board - as if the board were a circle rather than a rectangle. So this approximation is not very accurate - another possible improvement for the algorithm :-) But for this purpose the approach is good enough and works.
So in the end the superposition of these three forces, accumulated from all queens and borders, pushes the targeted queen into the right position - a partial solution of the eight queens problem. And finally we get a solution.
The algorithm was developed in the Java programming language. This is my first experience with Java, so don't expect nice OO patterns :-)
I was developing with the NetBeans IDE, which is a great tool for Java. You can download this solution and use it as you need from <a href="https://drive.google.com/file/d/0BwrFeS7BioZiQVBTZm1nMERpbEE/edit?usp=sharing">here</a>.
The class structure of the Java project is the following:<br /><br />
- <b><span style="color: yellow;">ChessBoard</span></b>.<br /> Holds information about the queens and the chess board representation data used for printing the board to output; groups code which operates on the queen objects.<br />
<br />- <b><span style="color: yellow;">CollisionDetector</span></b>.<br /> A very small class which holds just one function, "hasCollision", for determining whether two queens occupy the same position.<br />
<br />- <b><span style="color: yellow;">EightQueensSolver</span></b>.<br /> Abstracts the solution at the highest level. The most important method is SolveEightQueensProblem(), which is structured as follows:<br />
<pre style="color: green">public boolean SolveEightQueensProblem() {
    final int totalIterations = 10000;
    boolean found;
    for (int iterations = 0; iterations < totalIterations; iterations++) {
        findMaximumTwoWrongQueens();
        if (currentWrongPositions == 0)
            return true;
        if (currentWrongPositions == 1) {
            found = findBruteForceSolutionFor1Queen();
            if (found)
                return true;
        }
        if (currentWrongPositions == 2) {
            found = findBruteForceSolutionFor2Queens();
            if (found)
                return true;
        }
    }
    return false;
}
</pre>
All the algorithm magic goes into the function <b>"findMaximumTwoWrongQueens()"</b> - using the forces mentioned above, it leaves at most two queens in wrong positions, and SolveEightQueensProblem() then performs a brute force search on those two queens' positions. <b>findMaximumTwoWrongQueens</b> is structured as follows (less important code is skipped here to show the reader the most useful part of the algorithm):<br />
<pre style="color: green">private void findMaximumTwoWrongQueens() {
    final int iterationsForTwoWrongPositions = 300;
    currentWrongPositions = 8;
    while (currentWrongPositions > 2) {
        chessBoard = new ChessBoard();
        chessBoard.RandomlyPlaceEightQueens();
        for (int i = 0; i < iterationsForTwoWrongPositions; i++) {
            chessBoard.ApplyElectricForce();
            chessBoard.UpdateChessBoard();
            currentWrongPositions = chessBoard.numberOfQueensInWrongPositions();
            if (currentWrongPositions < 3)
                break;
        }
    }
}
</pre>
Here the most interesting code is the function <b>ApplyElectricForce()</b>, which calculates a new movement direction for each queen by calling the method <b>CalculateGlobalMovementDirection(Queen, ChessBoard)</b>.
The method <b>CalculateGlobalMovementDirection()</b> is structured as follows:<br />
<pre style="color: green">public Tuple&lt;Integer, Integer&gt; CalculateGlobalMovementDirection(Queen queen, ChessBoard chessBoard) {
    Tuple&lt;Integer, Integer&gt; direction = Tuple.zeroTuple;
    Tuple&lt;Integer, Integer&gt; directionInfluenceZoneRepulsion = Tuple.zeroTuple;
    Tuple&lt;Integer, Integer&gt; directionElectricForceFromQueen = Tuple.zeroTuple;
    Tuple&lt;Integer, Integer&gt; directionElectricForceFromBoardBorders = Tuple.zeroTuple;
    if (queen == null)
        throw new RuntimeException("queen is null");
    if (chessBoard == null)
        throw new RuntimeException("chessboard is null");
    for (int i = 0; i < 8; i++) {
        if (queen.Id.equals(chessBoard.queens[i].Id) == false) {
            directionInfluenceZoneRepulsion = RepulsionFromTheSameLineDirectionFromAnotherQueen(queen, chessBoard.queens[i]);
            directionElectricForceFromQueen = ElectricForceDirectionFromAnotherQueen(queen, chessBoard.queens[i]);
            if (Tuple.AreTuplesTheSame(directionInfluenceZoneRepulsion, Tuple.zeroTuple))
                directionElectricForceFromQueen = Tuple.zeroTuple;
            direction = Tuple.AddTuples(direction,
                Tuple.AddTuples
                (
                    Tuple.multiplyScalar(6, directionInfluenceZoneRepulsion),
                    Tuple.multiplyScalar(2, directionElectricForceFromQueen)
                )
            );
        }
    }
    // add borders effect
    directionElectricForceFromBoardBorders = ElectricForceDirectionFromBoardBorders(queen);
    direction = Tuple.AddTuples(
        Tuple.multiplyScalar(1, direction),
        Tuple.multiplyScalar(5, directionElectricForceFromBoardBorders)
    );
    return Tuple.NormalizeTuple(direction);
}
</pre>
This method is the key to the success of the algorithm and is the most important part of the code.<br />
<br />- <b><span style="color: yellow;">Geometry</span></b>.<br /> This class abstracts methods for detecting whether two queens are on the same line - be it horizontal, vertical or diagonal.<br />
<br />- <b><span style="color: yellow;">JavaEightQueensProblem</span></b>.<br /> The class which holds the main() method, i.e. the main program.<br />
<br />- <b><span style="color: yellow;">Queen</span></b>.<br /> A very tiny class which abstracts queen attributes such as the queen's identification on the board, its position, its movement direction, and a state flag indicating whether it has reached a position not influenced by other queens.<br />
<br />- <b><span style="color: yellow;">RepulsionForce</span></b>.<br /> This class has several very important methods: <b>ElectricForceDirectionFromAnotherQueen</b> - calculates the direction of the target queen under the source queen's electrical force; <b>RepulsionFromTheSameLineDirectionFromAnotherQueen</b> - calculates the direction of the target queen under the source queen's force which pushes it off the shared line; <b>ElectricForceDirectionFromBoardBorders</b> - calculates the direction of the target queen under the board borders' electrical force, which pushes the queen towards the center of the board; <b>CalculateGlobalMovementDirection</b> - calculates the direction of the target queen as the aggregation of the directions produced by the forces of the 7 other queens plus the direction produced by the board borders' repulsion force.<br />
<br />- <b><span style="color: yellow;">Tuple</span></b>.<br /> A helper class which introduces a Tuple abstraction and methods that group vector algebra operations. Here a Tuple is used as a vector in the plane.<br />
That's it about the solution code. Now I think it would be fun to see a movie of a solution forming under the effect of the repulsion forces. On each run the algorithm finds some random solution. Here is my favorite one:<br />
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dxWys3eem3AnLzOcHivixID_reSApxP9GNTA6IK2QyqVG-bh3X7fN7w5stfQeAes3FgQO23Q3tzKit5uV8QyA' class='b-hbp-video b-uploaded' frameborder='0'></iframe><br>
So in this algorithm run, after 1397 iterations the queens arrange into this beautiful solution lattice:<br>
<a href="http://3.bp.blogspot.com/-FQME-9Jbwhk/VBlY7k_rJpI/AAAAAAAAAXs/4dxSWzHWUbo/s1600/sol21.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-FQME-9Jbwhk/VBlY7k_rJpI/AAAAAAAAAXs/4dxSWzHWUbo/s320/sol21.png" /></a><br>
<br /><b>Conclusion:</b><br /> It seems the electrical force analogy is useful for solving the eight queens problem, and it may also be useful for other problems where no exact solution algorithm exists and general methods - such as genetic algorithms or genetic programming - are used instead. One just needs a good abstraction for repulsion/attraction forces which push the problem state away from bad solutions or attract it towards better partial ones. <br>Have fun using the electrical force analogy in algorithms !Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-46223894533432428532014-07-30T11:40:00.003-07:002014-07-30T11:47:47.198-07:00Smoking effects on cigarette filter colorThis time I will present some mini research on how smoking changes the color of a cigarette filter. The experiment was the following - I photographed the cigarette filter with a camera after each inhalation.
Later these filter pictures were processed with a C++ program to determine the filter's average normalized opacity and how it changes with the inhalation number. You can download the C++ program which analyzes this effect from <a href="https://drive.google.com/file/d/0BwrFeS7BioZic1d4cHR5N0daTkk/edit?usp=sharing">here</a>. The zip includes the C++ project itself and PGM pictures of the cigarette filter shot after each inhalation. The C++ project is simple - it has the following parts:<br><br>
- a pgmImage class, with loading and saving methods for the PGM image type.<br>
- a cigaretteFilterAnalyzer class, which calculates the normalized average filter opacity in each image.<br>
- cigaretteFilterEffects.cpp, the program itself, which outputs the experiment data on stdout.<br><br>
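The opacity measure itself boils down to averaging the gray levels of the filter region and normalizing against an unsmoked filter. A minimal Python sketch of that idea (the actual analysis is done by the C++ classes above; the function names here are made up):

```python
def average_gray(pixels):
    """Mean gray level of the filter region (0 = black, 255 = white)."""
    return sum(pixels) / float(len(pixels))

def normalized_opacity(pixels, clean_pixels):
    """0.0 for an unsmoked filter, approaching 1.0 as the filter darkens."""
    clean = average_gray(clean_pixels)
    return 1.0 - average_gray(pixels) / clean

# a filter twice as dark as the clean one -> opacity 0.5
print(normalized_opacity([100] * 4, [200] * 4))
```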
The basic effect, as expected, is that with each inhalation the filter gets darker and darker. I compiled 13 pictures into one to show this effect:<br>
<a href="http://4.bp.blogspot.com/-Sg0NaOpoObQ/U9kxhhikDYI/AAAAAAAAAWY/IlX8qFVVls4/s1600/filterAfterInhalations.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-Sg0NaOpoObQ/U9kxhhikDYI/AAAAAAAAAWY/IlX8qFVVls4/s400/filterAfterInhalations.png" /></a>
<br>The number below each filter image is the inhalation number.
In addition to the visual appeal of this effect, I made a plot from the experiment data processed with the C++ program. You can see here how filter opacity changes with the inhalation number:<br>
<a href="http://4.bp.blogspot.com/-3lDPy7kfsnQ/U9kyXuWMH5I/AAAAAAAAAWk/uXFttTmuzpk/s1600/filter_opacity.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-3lDPy7kfsnQ/U9kyXuWMH5I/AAAAAAAAAWk/uXFttTmuzpk/s400/filter_opacity.png" /></a>
<br>In this graph I added a linear fit. The fit has an R^2 of 0.95, so it seems a linear model describes the opacity changes pretty well.
What is interesting is that, given the relative filter opacity, you can deduce how many inhalations were taken. I bet there are more interesting ideas to explore which expand on this
research - for example, the effect should depend on filter quality, so some research could be done on how filter quality affects the opacity curve, which could be interesting to cigarette manufacturers. But that is just a guess. The effect should also depend on filter structure, size, cigarette type, the materials used, inhalation duration, etc. For example, the cigarettes used for this experiment had a menthol capsule, so an additional effect appears related to how smoke propagates through the capsule.
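Given the linear fit, estimating the inhalation count from a measured opacity is just a matter of inverting the fitted line. A Python sketch with toy data (the real slope and intercept would come from the experiment's plot, not from these made-up numbers):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = k*x + c."""
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return k, my - k * mx

def inhalations_from_opacity(opacity, k, c):
    """Invert the linear model to recover the inhalation number."""
    return (opacity - c) / k

# toy data: opacity grows by 0.1 per inhalation
k, c = fit_line([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4])
print(inhalations_from_opacity(0.25, k, c))  # about 2.5 inhalations
```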
Below is also an interesting picture of what the cigarette filter looks like before smoking:<br>
<a href="http://3.bp.blogspot.com/-T6wPjKK_PKk/U9k2TkuJ02I/AAAAAAAAAWw/1TUYN6e-ycM/s1600/filterPorosity.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-T6wPjKK_PKk/U9k2TkuJ02I/AAAAAAAAAWw/1TUYN6e-ycM/s400/filterPorosity.png" /></a>
<br>The filter picture was converted to gray-scale and histogram equalization was performed on the image to amplify the color differences between pixels. Some color changes are too small for the eye to see, but after histogram equalization the smaller differences become easy to see. After performing this we get a nice picture of the filter's porosity :-)
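Histogram equalization works by remapping gray levels through their cumulative histogram, which stretches a narrow band of nearly identical values over the whole 0-255 range. A stdlib-only Python sketch of the standard method (not the exact tool used on these pictures):

```python
def equalize(pixels, levels=256):
    """Remap gray levels so their cumulative histogram becomes ~linear."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function of the gray levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-empty bin
    n = len(pixels)
    scale = float(levels - 1) / max(n - cdf_min, 1)
    return [int(round((cdf[p] - cdf_min) * scale)) for p in pixels]

# gray levels crowded in a narrow band get stretched apart
print(equalize([100, 100, 101, 102]))
```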
As I've said, these cigarettes had a menthol capsule. So below is one image, after 12 inhalations, converted to gray-scale with histogram equalization performed:<br>
<a href="http://2.bp.blogspot.com/-LraqRrP6cH4/U9k3WosdZ0I/AAAAAAAAAW8/Yfi9MgS43AA/s1600/mentholCapsuleEffect.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-LraqRrP6cH4/U9k3WosdZ0I/AAAAAAAAAW8/Yfi9MgS43AA/s400/mentholCapsuleEffect.png" /></a>
<br>Some lines were added to indicate the menthol capsule. It can be clearly seen that the capsule is pinched in the middle - that is where it was crushed with the fingers before smoking; otherwise you would not get the menthol taste :-) At first I didn't understand why this shape appears in almost all pictures. I thought we get a concentric darker zone in the middle of the filter simply because smoke somehow
propagates better through the center. But for that to be true there would have to be some randomness between the filter pictures, because you can't guarantee that smoke propagates the same way on each inhalation. On the contrary, the shape was too clear and too similar across all images, and only a very deterministic cause can induce the same shape every time. The best explanation is the menthol capsule effect. Maybe there are other explanations - I don't know :-)
That's it. People who smoke can make a similar experiment themselves. But I bet it is better not to smoke at all: not only does the opacity of the filter change, there are more serious effects on health too. And this opacity effect indirectly shows that as well, because the reason the opacity grows with each inhalation is that the filter becomes more and more polluted.
So you'd better stop smoking :-)
Regards,
AgniusUnknownnoreply@blogger.com1tag:blogger.com,1999:blog-7408569574805188613.post-79771361243695031982014-01-01T04:30:00.000-08:002014-01-01T04:30:33.130-08:00ASP.NET web site for generating test dataHave you ever wanted an ASP.NET site (and its source code) with functionality like <a href="http://www.generatedata.com">generatedata.com</a> (which aims at generating test data) ?
If so - just check my <a href="http://generatedata.codeplex.com">newest creation</a> at CodePlex. The license is MIT, so you can use it as you wish.
Have fun generating test data !
Best wishes,
AgniusUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-40516984109405302932013-09-22T14:55:00.002-07:002013-09-25T14:23:03.604-07:00Building logic gates from dominoFeeling bored ? Try to build a <a href="http://en.wikipedia.org/wiki/Domino_computer">computer from dominoes</a> :-). It is at least theoretically possible, but of course practically unfeasible. What is practically feasible is building some primitive logic circuits from dominoes. It is challenging and fun, because a domino system is highly unstable, so it is hard to construct 100% deterministic logic functions from it. We will try to build an <a href="http://en.wikipedia.org/wiki/AND_gate">AND logic gate</a> here. According to Wikipedia it is easy to build OR and XOR gates like this:
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-a4S77PUi3oY/Uj9lsOwleJI/AAAAAAAAATs/jOjV9wdsqE4/s1600/dominoOrXor.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-a4S77PUi3oY/Uj9lsOwleJI/AAAAAAAAATs/jOjV9wdsqE4/s1600/dominoOrXor.png" /></a></div>
<br />
<br />
So an AND gate can be composed from these two gate types, as the wiki states, by the formula: <br />
<blockquote class="tr_bq">
<b style="background-color: white;"><span style="color: #38761d;">A AND B = (A OR B) XOR (A XOR B)</span></b></blockquote>
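Before building anything out of dominoes, this identity is easy to verify exhaustively in a few lines of Python:

```python
# check A AND B == (A OR B) XOR (A XOR B) for all four input pairs
for a in (0, 1):
    for b in (0, 1):
        lhs = a & b
        rhs = (a | b) ^ (a ^ b)
        print(a, b, "->", lhs, rhs)
        assert lhs == rhs  # identity holds on every row of the truth table
```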
But here we have two problems:<br />
<ul>
<li>We need many domino pieces, because according to the formula we need 2 XOR gates and 1 OR gate. Besides, the XOR gate is relatively big.</li>
<li>The wiki also states that the XOR gate is extremely dependent on timing. Because we have 2 XOR gates here, the dependence on timing increases as well.</li>
</ul>
This means that an AND gate constructed by that scheme is highly unstable and/or needs many domino pieces. Of course we could try to design a new XOR gate from the ground up, but we would still need 3 gates, and that is not in the scope of this article (maybe we will try to construct a XOR gate in some future article). So what is left ?
The answer: we will try to construct an AND gate from the ground up, with better stability and fewer domino pieces.
After some thinking and trial and error I came to this AND gate design:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-s-8xutbgvX4/Uj9f26eN4KI/AAAAAAAAATc/H2uQe-tmynE/s1600/dominoANDgate.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-s-8xutbgvX4/Uj9f26eN4KI/AAAAAAAAATc/H2uQe-tmynE/s320/dominoANDgate.png" /></a></div>
<br />
Well, my scheme has its own instability, which depends on the distance between the dominoes. But in my opinion distance can be adjusted more easily than the timing between two events without any external devices ;-) So this is how my domino AND logic gate performs in a live environment:<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dw5UBPpxDQwWYiRGfp79ZVr-DDX6LRFibXnTF_jQAeL85Ixk-ZGuDv6sZnc_C_MO6Azrv16C6SH-pEPUvv6Ng' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div>
<br />
And here is an example of how a NOT gate could be built:<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dzuNgMC9Ihy0paXxk5gMApGQ8RuJ37UDEXJvGB1WYwldsloTnnsjYDHEl_lfbfFUVWHdbyV57o3fBY5Ed7GgQ' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div>
<br />
Have fun with domino computing ! :-)Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-7408569574805188613.post-22811461571795046172012-09-26T12:09:00.001-07:002015-07-15T03:25:29.882-07:00DNA sequence visualizationWhile reading this very interesting article about the <a href="http://www.newscientist.com/article/mg21528826.200-a-brief-history-of-the-human-genome.html">history of the human genome</a> I stumbled upon the fact that a portion of our DNA is 3 billion years old !! That's a pretty damn big number of years for a DNA sequence to survive through trillions of our ancestors. That got my attention, and I wanted to see what this 3-billion-year-old sequence looks like in visualized form. But after a short Google search I didn't find a simple DNA sequence visualizer. So I decided to code a simple DNA sequence visualizer for the browser with the help of Javascript. Here it is:</br>
(by default the 3-billion-year-old DNA sequence shared among all living creatures is set, but you can paste your own sequence as well - hit the button to see it).
<script>
function mix(color1, color2, ratio) {
var f = function(x,y){ return parseInt(x*(1.0-ratio)+ y*ratio); };
var color = new Object();
color.r = f(color1.r,color2.r);
color.g = f(color1.g,color2.g);
color.b = f(color1.b,color2.b);
return color;
}
//*****************************************
function parseColor(color) {
return 'rgba('+color.r+','+color.g+','+color.b+',1)';
}
//********************************************
function roundRect(ctx, x, y, width, height, radius, color) {
if (typeof radius === "undefined") {
radius = 5;
}
ctx.fillStyle = parseColor(color);
ctx.beginPath();
ctx.moveTo(x + radius, y);
ctx.lineTo(x + width - radius, y);
ctx.quadraticCurveTo(x + width, y, x + width, y + radius);
ctx.lineTo(x + width, y + height - radius);
ctx.quadraticCurveTo(x + width, y + height, x + width - radius, y + height);
ctx.lineTo(x + radius, y + height);
ctx.quadraticCurveTo(x, y + height, x, y + height - radius);
ctx.lineTo(x, y + radius);
ctx.quadraticCurveTo(x, y, x + radius, y);
ctx.closePath();
ctx.fill();
}
// **********************************************************
function drawShadedRect(ctx, x, y, width, height, radius, boxColor, backColor) {
roundRect(ctx, x, y, width, height, radius, mix(boxColor,backColor,1.0));
roundRect(ctx, x+1, y+1, width-1*2, height-1*2, radius, mix(boxColor,backColor,0.8));
roundRect(ctx, x+2, y+2, width-2*2, height-2*2, radius, mix(boxColor,backColor,0.6));
roundRect(ctx, x+3, y+3, width-3*2, height-3*2, radius, mix(boxColor,backColor,0.4));
roundRect(ctx, x+4, y+4, width-4*2, height-4*2, radius, mix(boxColor,backColor,0.2));
roundRect(ctx, x+5, y+5, width-5*2, height-5*2, radius, mix(boxColor,backColor,0.0));
}
//***********************************************************
function findXpositionOfAcidBase(letter, spacing, boxWidth) {
var bases = "ACGT";
var xpos = -1;
var fi = -1;
var i;
for (i=0; i<4; i++) {
if (letter == bases[i]) {
fi = i;
break;
}
}
if (fi != -1) {
xpos = (spacing + boxWidth) * fi;
}
return xpos;
}
//************************************************************
function findColorForAcidBase(letter, multicolor) {
var bases = "ACGT";
var colorA, colorC, colorG, colorT;
// A base
colorA = new Object();
colorA.r = 0;
colorA.g = 100;
colorA.b = 181;
// C base
colorC = new Object();
colorC.r = 255;
colorC.g = 148;
colorC.b = 0
// G base
colorG = new Object();
colorG.r = 197;
colorG.g = 0;
colorG.b = 124;
// T base
colorT = new Object();
colorT.r = 140;
colorT.g = 199;
colorT.b = 0;
var colors = [colorA, colorC, colorG, colorT];
if (multicolor == false)
return colors[0];
var i;
for (i=0; i<4; i++) {
if (letter == bases[i]) {
return colors[i];
}
}
}
//************************************************************
function isSeqLegal(seq) {
var i;
for (i=0; i<seq.length; i++) {
if (seq[i] != 'A' &&
seq[i] != 'C' &&
seq[i] != 'G' &&
seq[i] != 'T') {
return false;
}
}
return true;
}
//************************************************************
function visualizeDNA()
{
var canvas = document.getElementById('canvas');
var ctx2d = canvas.getContext('2d');
var dnaText = document.getElementById('text').value;
dnaText = dnaText.replace(/\s/g, '');
dnaText = dnaText.toUpperCase();
var boxHeight = 12;
var boxSpacing = 1;
var boxWidth = (canvas.width / 4) - boxSpacing;
if (dnaText.length == 0) {
alert('enter DNA sequence !');
return;
}
if ( !isSeqLegal(dnaText)) {
alert('only ACGT sequences supported !');
return;
}
canvas.height = dnaText.length * (boxHeight + boxSpacing);
var boxColor = new Object();
boxColor.r = 0;
boxColor.g = 0;
boxColor.b = 0;
var backColor = new Object();
backColor.r = 176;
backColor.g = 224;
backColor.b = 230;
var currY = boxSpacing;
var currX;
var i;
var multicolor = document.getElementById('checkbox').checked;
for (i=0; i<dnaText.length; i++) {
currX = findXpositionOfAcidBase(dnaText[i], boxSpacing, boxWidth);
var currColor = findColorForAcidBase(dnaText[i], multicolor);
drawShadedRect(ctx2d, currX, currY, boxWidth, boxHeight, 4, backColor, currColor);
currY += boxSpacing + boxHeight;
}
}
</script>
<div>
<p><canvas id="canvas" width="200" height="10" style="border:1px solid #b3b3b3; background-color:rgba(176,224,230,1);">
Your browser does not support the HTML5 canvas tag.</canvas></p>
<p><textarea id="text" rows="3" cols="30">GTGCCAGCAGCCGCGGTAATTCCAGCTCCAATAGCGTATATTAAAGTTGCTGCAGTTAAAAAG</textarea></p>
<p><input id="checkbox" type="checkbox" />Each nucleobase in different color<br /></p>
<p><button id="button" type="button" onclick="visualizeDNA()">visualize DNA sequence</button></p>
</div>
So you'd better check whether your friend has this 3-billion-year-old sequence - otherwise you may be talking with a <a href="http://en.wikipedia.org/wiki/Cylon_(Battlestar_Galactica)">Cylon</a> =)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-87352703462094164442011-10-06T10:52:00.000-07:002014-11-21T05:57:51.021-08:00iPhone game "Pong Hau K'i" source codeHave you ever dreamed of writing an iPhone board game and wondered where to start ? Or maybe you wanted some simple iPhone game source code to look at and learn from ? Now you have a good opportunity. I've decided to publish my iPhone board game <a href="http://www.1888freeonlinegames.com/iphonegames/pong-hau-k-i-7228.html">Pong Hau K'i</a><br> <a href="https://drive.google.com/file/d/0BwrFeS7BioZiOWE3MTIyMmYtMGJlNy00MGRhLWEwYWMtYjRkMDdhODlhN2Nk/view">source code & assets</a>. Use it for any purpose you wish - personal, educational or commercial ...
<br>
HTH !Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-80709062886582710712011-05-27T01:39:00.000-07:002011-05-27T01:54:43.754-07:00Ellipse detection in image by using Hough transformHow can we detect ellipses in images ? One way is to use the <a href="http://en.wikipedia.org/wiki/Hough_transform">Hough transform</a>. I will use the Hough transform variant created by <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.8792&rep=rep1&type=pdf">Yonghong Xie and Qiang Ji</a>. The algorithm's pseudo-code:<br />
<br />
<pre>1. Store all edge pixels in a one dimensional array.
2. Clear the accumulator array.
3. For each pixel (x1, y1), carry out the following steps from (4) to (14).
4. For each other pixel (x2, y2), if the distance between
(x1, y1) and (x2, y2) is greater than the required least
distance for a pair of pixels to be considered then
carry out the following steps from (5) to (14).
5. From the pair of pixels (x1, y1) and (x2, y2) calculate the center,
orientation and length of major axis for the assumed ellipse.
6. For each third pixel (x, y), if the distance between
(x, y) and the assumed center (x0, y0) is greater than
the required least distance for a pixel to be considered
then carry out the following steps from (7) to (9).
7. Calculate the length of minor axis.
8. Increment the accumulator for this length of minor axis by 1.
9. Loop until all pixels are computed for this pair of pixels.
10. Find the maximum element in accumulator array. The
related length is the possible length of minor axis
for assumed ellipse. If the vote is greater than the
required least number for assumed ellipse, one
ellipse is detected.
11. Output ellipse parameters.
12. Remove the pixels on the detected ellipse from edge pixel array.
13. Clear accumulator array.
14. Loop until all pairs of pixels are computed.
</pre><br />
Proof-of-concept algorithm implementation in Python:<hr><pre style="color:green">import sys
from PIL import Image,ImageFilter, ImageDraw
from math import *

# some global constants
EL_COVERAGE_RATIO = 0.9
EL_VERIFICATION_DISTANCE = 1.
EL_PATH_POINTS = 51
MIN_MINOR_FREQUENCY = 30
MIN_HALF_MAJOR = 8
MIN_HALF_MINOR = 6

def distance(p1,p2):
    x1,y1 = p1
    x2,y2 = p2
    return sqrt((x1-x2)**2 + (y1-y2)**2)

def nonnegative(v):
    return v if v >= 0 else 0

def parametricEllipse(center, a, b, angle):
    xc,yc = center
    elx = lambda t: xc + a * cos(t) * cos(angle) - b * sin(t) * sin(angle)
    ely = lambda t: yc + a * cos(t) * sin(angle) + b * sin(t) * cos(angle)
    return [(int(elx(2.*pi*x/float(EL_PATH_POINTS-1))),int(ely(2.*pi*x/float(EL_PATH_POINTS-1)))) for x in range(EL_PATH_POINTS)]

assert len(sys.argv) == 3, "missing input and/or output file !"
im = Image.open(sys.argv[1])
width, height = im.size
io = Image.new('RGB',(width, height),(255,255,255))
draw = ImageDraw.Draw(io)
# converting image to grayscale
im = im.convert('L')
# detecting edge pixels
im = im.filter(ImageFilter.FIND_EDGES)
# converting to binary image
im = im.convert('1')
pixels = im.load()
pxy = []
# extracting binary pixel coordinates
for x in range(width):
    for y in range(height):
        if pixels[x,y]==255:
            pxy.append((x,y))
# applying Hough transform for ellipse detection.
# algorithm is taken from this paper:
# http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.8792&rep=rep1&type=pdf
cIx = -1
colors = [(255,0,0),(0,200,0),(0,0,255)]
for x1,y1 in pxy:
    for x2,y2 in pxy:
        bbins = {}
        dist = distance((x1,y1),(x2,y2))
        if dist >= 2*MIN_HALF_MAJOR:
            cent = ((x1+x2)/2.,(y1+y2)/2.)
            a = dist/2. # semi-length of major axis
            alfa = atan2((y2 - y1),(x2 - x1))
            for rx,ry in pxy:
                d = distance((rx,ry),cent)
                if d >= MIN_HALF_MINOR:
                    f = distance((rx,ry),(x2,y2))
                    cost = (a**2. + d**2. - f**2.)/(0.00001+2.*a*d)
                    b = sqrt(nonnegative((a**2. * d**2. * (1.-cost**2.))/(0.00001 + a**2. - d**2. * cost**2.))) # semi-length of minor axis
                    b = int(b)
                    if bbins.has_key(b):
                        bbins[b]+=1
                    elif b > 0:
                        bbins[b]=1
            if not bbins:
                continue
            bbins_rev = dict([(v,k) for k,v in bbins.iteritems()])
            max_freq = max(bbins_rev.keys())
            bmax = bbins_rev[max_freq]
            # Did we find a probable ellipse ?
            if max_freq >= MIN_MINOR_FREQUENCY and alfa >=0.0 and bmax >= MIN_HALF_MINOR:
                elData = parametricEllipse(cent, a, bmax, alfa)
                supported = []
                supportRatio = 0.0
                # counting how many pixels lie on the ellipse path
                for i in range(EL_PATH_POINTS):
                    elx,ely = elData[i]
                    added = False
                    for x,y in pxy:
                        if distance((elx,ely),(x,y)) <= EL_VERIFICATION_DISTANCE:
                            supported.append((x,y))
                            if not added:
                                supportRatio += 1./float(EL_PATH_POINTS)
                                added = True
                supported = list(set(supported))
                # if the number of pixels on the ellipse path is big enough
                if supportRatio >= EL_COVERAGE_RATIO:
                    cIx = (cIx+1)%3
                    print "coverage %.2f" % supportRatio,"frequency ", max_freq, "center ", cent, "angle %.2f" % alfa, "axes (%.2f,%.2f)" % (a, bmax)
                    # removing found ellipse pixels from further analysis
                    for p in supported:
                        pxy.remove(p)
                    # drawing found ellipse
                    for i in range(EL_PATH_POINTS):
                        elx,ely = elData[i]
                        if i < EL_PATH_POINTS-1:
                            draw.line(elData[i] + elData[i+1], fill=colors[cIx])
io.save(sys.argv[2])
print "***************************************************************"
print "************************** DONE *******************************"
print "***************************************************************"
</pre><hr>(The prototype algorithm is slow - tested only on 50x50 images.) So, by running this algorithm on this image:
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-0tIJbp27Q1I/Td9h95rFE-I/AAAAAAAAARA/MGg-k2Y6x-8/s1600/atom.jpg" imageanchor="1" style=""><img border="0" height="50" width="50" src="http://2.bp.blogspot.com/-0tIJbp27Q1I/Td9h95rFE-I/AAAAAAAAARA/MGg-k2Y6x-8/s400/atom.jpg" /></a></div>we get the following output:
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-KREri3aXdAE/Td9iLy0FHEI/AAAAAAAAARI/rvysi2Ihef0/s1600/out.png" imageanchor="1" style=""><img border="0" height="50" width="50" src="http://2.bp.blogspot.com/-KREri3aXdAE/Td9iLy0FHEI/AAAAAAAAARI/rvysi2Ihef0/s400/out.png" /></a></div><br />
Have fun in computer vision !Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-40968216148168853842011-05-06T00:11:00.000-07:002017-03-30T11:59:28.837-07:00Gradient transfer functionSuppose we need to draw a linear gradient, but in a way that lets us control the color distribution between the gradient's parts. How do we do that ? The answer: a gradient transfer function.<br />
<br />
Algorithm is this:<br />
1. Extract the pixel's relative distance [0..1] from the start of the gradient.<br />
2. Update this distance by feeding it to the gradient transfer function.<br />
3. Blend the source color with the target color using the updated distance as the blend ratio.<br />
4. Set the calculated color on the pixel.<br />
<br />
We will use such gradient transfer function:<br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-D9g6fTjRuRg/TcOU9AUsvtI/AAAAAAAAAQI/JbwaMbmS4lc/s1600/TransferFunction.gif" imageanchor="1" style=""><img border="0" height="29" width="285" src="http://2.bp.blogspot.com/-D9g6fTjRuRg/TcOU9AUsvtI/AAAAAAAAAQI/JbwaMbmS4lc/s400/TransferFunction.gif" /></a></div>where x is the pixel's relative distance from the start and a, b are adjustable parameters.<br />
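The four steps above can be sketched compactly in Python, using the same transfer function f(x) = (1-a)*x + a/(1+e^(b-2bx)) that the in-page Javascript implements:

```python
import math

def transfer(x, a, b):
    """Gradient transfer function: a=0 gives a plain linear gradient,
    larger b makes the color transition more abrupt."""
    return (1 - a) * x + a / (1 + math.exp(b - 2 * b * x))

def blend(k, c1, c2):
    """Linear blend of two RGB colors with ratio k in [0, 1]."""
    return tuple(int((1 - k) * u + k * v) for u, v in zip(c1, c2))

src, dst = (255, 0, 0), (0, 0, 255)
for px in range(5):                            # a 5-pixel-wide gradient
    x = px / 4.0                               # step 1: relative distance
    k = min(max(transfer(x, 0.5, 10), 0), 1)   # step 2: transfer + clamp
    print(px, blend(k, src, dst))              # steps 3-4: blend and set
```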
<br />
Below is a Javascript implementation of this method (your browser must support the HTML5 canvas element). You can try changing the a, b parameters of the transfer function and see what happens to the gradient.<script type="text/javascript" src="https://cdn.rawgit.com/0x69/coding-experiments/master/Javascript/jquery.js"></script> <br />
<script type="text/javascript" src="https://cdn.rawgit.com/0x69/coding-experiments/master/Javascript/jquery.flot.js"></script> <br />
<script type="text/javascript" src="https://cdn.rawgit.com/0x69/coding-experiments/master/Javascript/gradTransfer.js"></script> <br />
<br />
<p> <br />
<b>a</b> <input id="rngNonlinearity" type="range" min="0.0" max="1.01" value="0.5" step="0.01" oninput="showValue(this.value,'Nonlinearity')" /><span id="Nonlinearity">0.5</span><br />
<br />
<br />
<br />
<b>b</b> <input id="rngAbruptness" type="range" min="5.0" max="20.00" value="5.00" step="0.01" oninput="showValue(this.value,'Abruptness')" /><span id="Abruptness">5.00</span><br />
</p><br />
<div style="position: relative;"> <div id="placeholder" style="width:300px; height:300px; "></div> <canvas id="canvasGrad" width="274" height="50"
style="position: absolute; left: 22px; top: 305px; z-index: 1; border: 1px solid #000000; background-color: #EEEEEE;"><br />
Sorry, canvas not supported!<br />
</canvas><br />
</div><br />
<!-- On Load --><br />
<script type="text/javascript"> $(GeneratePlot); </script> <br />
<br />
And here is the JavaScript code that does this (the plot is generated with the Flot library):<br />
<hr><pre style="color: green">// update the displayed slider value and redraw plot and gradient
function showValue(newValue, el) {
    document.getElementById(el).innerHTML = parseFloat(newValue).toFixed(2);
    GeneratePlot();
}
function Clamp(x, a, b) {
    return Math.min(Math.max(x, a), b);
}
// gradient transfer function: (1-a)*x + a/(1 + e^(b - 2*b*x))
function NonLinearTransfer(x, a, b) {
    return (1 - a)*x + a*Math.pow(1 + Math.exp(b - 2*b*x), -1);
}
function GeneratePlot() {
    var data = [];
    var a = parseFloat(document.getElementById("rngNonlinearity").value);
    var b = parseFloat(document.getElementById("rngAbruptness").value);
    for (var i = 0; i <= 1; i += 0.01)
        data.push([i, Clamp(NonLinearTransfer(i, a, b), 0, 1)]);
    $.plot($("#placeholder"),
        [{ data: data, label: "Transfer function" }],
        {
            xaxes: [ { min: 0, max: 1 } ],
            yaxes: [ { min: 0, max: 1 } ],
            legend: { position: 'nw' }
        }
    );
    GenerateGrad();
}
// linear interpolation between x and y with ratio k
function Blend(k, x, y) {
    return (1 - k)*x + k*y;
}
function setPixel(imageData, x, y, r, g, b, a) {
    var index = (x + y*imageData.width)*4;
    imageData.data[index + 0] = r;
    imageData.data[index + 1] = g;
    imageData.data[index + 2] = b;
    imageData.data[index + 3] = a;
}
function GenerateGrad() {
    var element = document.getElementById("canvasGrad");
    var c = element.getContext("2d");
    var width = parseInt(element.getAttribute("width"));
    var height = parseInt(element.getAttribute("height"));
    var imageData = c.createImageData(width, height);
    var scolor = [0, 255, 0];
    var tcolor = [0, 0, 255];
    var c1 = parseFloat(document.getElementById("rngNonlinearity").value);
    var c2 = parseFloat(document.getElementById("rngAbruptness").value);
    // draw gradient column by column
    for (var x = 0; x < width; x++) {
        var k = NonLinearTransfer(x/width, c1, c2);
        var r = Blend(k, scolor[0], tcolor[0]);
        var g = Blend(k, scolor[1], tcolor[1]);
        var b = Blend(k, scolor[2], tcolor[2]);
        for (var y = 0; y < height; y++) {
            setPixel(imageData, x, y, r, g, b, 0xff);
        }
    }
    c.putImageData(imageData, 0, 0);
}
</pre>
<hr>Have fun!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-49763743822089445132011-01-24T09:27:00.000-08:002012-06-04T01:54:08.305-07:00Algorithm to determine image contrastHow do we determine image contrast? More specifically, how do we determine that image contrast is low and the image needs automatic contrast adjustment through the histogram equalization method?<br />
<br />The algorithm is this:<br />
1. Calculate the cumulative histogram of the image.<br />
2. Fit a linear regression to the cumulative histogram in the form freq(x) = A*x + B.<br />
3. Calculate the RMSE of real_cumulative_frequency(x) - freq(x).<br />
4. If that RMSE is close to zero, the image is already equalized and should have good contrast. (In other words, the cumulative histogram of an equalized image must be linear.)<br />
<br />Because a picture is worth a thousand words, this is how cumulative histograms look for images with different levels of contrast:<br />
<a href="http://4.bp.blogspot.com/-qJ89M1OAisg/T8x1kXdsf6I/AAAAAAAAARU/j0qjRkUbLQc/s1600/cumulative.png" imageanchor="1" style=""><img border="0" height="202" width="400" src="http://4.bp.blogspot.com/-qJ89M1OAisg/T8x1kXdsf6I/AAAAAAAAARU/j0qjRkUbLQc/s400/cumulative.png" /></a>
<br />So as you can see, the image with the most aggressive contrast has a cumulative histogram that is almost a perfect line.
<br /><br />Now the fun part: C code which opens an image in ASCII-encoded PGM format and detects whether the image needs contrast adjustment:
<pre style="color: green;">
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#define MAX_COLS 1000
#define MAX_ROWS 1000
typedef struct {
    int Width;
    int Height;
    int max_value;
    int data[MAX_ROWS][MAX_COLS];
} PGMdata;
// assumes a GIMP-style ASCII PGM: "P2", comment line, dimensions,
// max value, then one pixel sample per line
void readPgmFile(char* fileName, PGMdata * pgmOut) {
    FILE * pFile;
    char line[1000];
    char* res;
    int lineNum = 0;
    if ((pFile = fopen(fileName , "r")) == NULL)
        printf("Error opening file %s \n", fileName);
    else {
        while ((res = fgets(line, sizeof(line), pFile)) != NULL) {
            lineNum++;
            // max value of pixel
            if (lineNum==4)
                sscanf(line,"%i",&pgmOut->max_value);
            // width and height of image
            if (lineNum==3) {
                sscanf(line,"%i %i",&pgmOut->Width, &pgmOut->Height);
            }
            // load real pixels
            if (lineNum > 4) {
                int ix = lineNum - 5;
                int row = ix/pgmOut->Width;
                int col = ix - row*pgmOut->Width;
                sscanf(line,"%i", &pgmOut->data[row][col]);
            }
        }
        fclose(pFile);
    }
}
void CumulativeHistogram(PGMdata* image, double* hist) {
    int i,j;
    double dp = 1.0/((double) image->Width*image->Height);
    // initializing histogram bins to zero
    for (i=0; i<256; i++) {
        hist[i] = 0.0;
    }
    // calculating histogram
    for (i=0; i<image->Width; i++) {
        for (j=0; j<image->Height; j++) {
            hist[image->data[j][i]] += dp;
        }
    }
    // making histogram cumulative
    for (i=0; i<255; i++) {
        hist[i+1] += hist[i];
    }
}
void CreateXmatrix(double mat[256][2]) {
    int i;
    for (i=0; i<256; i++) {
        mat[i][0] = 1;
        mat[i][1] = i;
    }
}
void TransposeMatrix(double mat[256][2], double tmat[2][256]) {
    int i,j;
    for (i=0; i<2; i++) {
        for (j=0; j<256; j++) {
            tmat[i][j] = mat[j][i];
        }
    }
}
double MultiplyMatrixes(double A[2][256], double B[256][2], int row, int col) {
    int i;
    double sum = 0.0;
    for (i=0; i<256; i++) {
        sum += A[row][i]*B[i][col];
    }
    return sum;
}
double MultiplyMatrixAndVector(double A[2][256], double Y[256], int row) {
    int i;
    double sum = 0.0;
    for (i=0; i<256; i++) {
        sum += A[row][i]*Y[i];
    }
    return sum;
}
double HistogramPredicted(double c0, double c1, double level) {
    return c0 + c1*level;
}
double RootMeanSquare(double* hist, double c0, double c1) {
    double rms = 0.0;
    int i;
    for (i=0; i<256; i++) {
        rms += pow(hist[i]-HistogramPredicted(c0,c1,i),2.0);
    }
    rms /= 256.0;
    return sqrt(rms);
}
void LinearLeastSquares(double* hist, double* c0, double* c1) {
    double X[256][2];
    double tX[2][256];
    double a1,a2,a3,a4;
    double b1, b2;
    // create matrix X composed of x'es
    CreateXmatrix(X);
    // transpose X matrix
    TransposeMatrix(X,tX);
    // calculate tX*X matrix which is
    // [a1 a2]
    // [a3 a4]
    a1 = MultiplyMatrixes(tX,X,0,0);
    a2 = MultiplyMatrixes(tX,X,0,1);
    a3 = MultiplyMatrixes(tX,X,1,0);
    a4 = MultiplyMatrixes(tX,X,1,1);
    // calculate tX*Y (Y=HISTOGRAM) which is
    // [b1]
    // [b2]
    b1 = MultiplyMatrixAndVector(tX,hist,0);
    b2 = MultiplyMatrixAndVector(tX,hist,1);
    // solve matrix equation by using elimination of variables method
    // [a1 a2]   [c0]   [b1]
    //         *      =
    // [a3 a4]   [c1]   [b2]
    *c1 = (a1*b2-a3*b1)/(a1*a4-a2*a3);
    *c0 = (b1-a2*(*c1))/a1;
}
int main () {
    double hist[256];
    static PGMdata image = {0};
    double c0=0.0, c1=0.0;
    double rms;
    char pgm_file[20] = {0};
    while (1) {
        // get PGM file name
        printf("PGM filename: ");
        scanf("%19s", pgm_file);
        // read grayscale image from PGM format
        memset(&image, 0, sizeof(image));
        readPgmFile(pgm_file, &image);
        if (image.Width == 0)
            continue;
        // create cumulative histogram
        CumulativeHistogram(&image, hist);
        // least squares method, to find c0,c1 coefficients in equation:
        // c0 + c1*gray_level = Frequency
        LinearLeastSquares(hist,&c0,&c1);
        // calculate RMS of Frequency[bin]-predicted_Frequency(bin)
        rms = RootMeanSquare(hist,c0,c1);
        // Low RMS shows that histogram frequencies are distributed in linear fashion
        // between bins. This means that histogram equalization was performed on the image
        // and/or that global image contrast is good.
        if (rms <= 0.01)
            printf("\n==> %s contrast is OK (rms=%f)\n\n", pgm_file ,rms);
        else
            printf("\n==> %s contrast is BAD (rms=%f)\n\n", pgm_file ,rms);
    }
    return 0;
}
</pre>
If you want to test this algorithm on an ASCII PGM image, you should first convert the image to PGM format with GIMP 2.6.11, because the code was only tested that way.<br />
So by running this C code on the image below (after converting it to PGM)<br />
<a href="http://3.bp.blogspot.com/_hTm-LSlj8U4/TT3EuV1SDiI/AAAAAAAAAP8/j3AAwgs55zY/s1600/land.jpg"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5565821014913453602" src="http://3.bp.blogspot.com/_hTm-LSlj8U4/TT3EuV1SDiI/AAAAAAAAAP8/j3AAwgs55zY/s400/land.jpg" style="cursor: hand; cursor: pointer; height: 200px; width: 300px;" /></a><br />
we get the algorithm output<br />
"<span style="font-weight: bold;">image contrast is BAD ... </span>" :-)Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-7408569574805188613.post-8649089228009412242011-01-15T09:03:00.000-08:002011-01-20T04:50:28.007-08:00Toon pixel shaderHow to make pixel shader which performs 'basic' toon shading operation on images ? <br /><br />Algorithm is following:<br />1. Convert pixel from RGB to HSV color space.<br />2. Map H,S,V values to some pre-defined set of H,S,V values.<br />3. Convert back from HSV to RGB.<br />4. Calculate if pixel is on edge, if so - ignore above calculated pixel color and place some pre-defined edge color instead.<br /><br />That's it. You will have image converted to toon-shaded variant :-)<br /><br />Here is GLSL code, which performs toon shading on image:<hr><pre style="color: green"><br />#version 150<br />uniform sampler2D Texture0;<br />varying vec2 texCoord;<br /><br />#define HueLevCount 6<br />#define SatLevCount 7<br />#define ValLevCount 4<br />float[HueLevCount] HueLevels = float[] (0.0,80.0,160.0,240.0,320.0,360.0);<br />float[SatLevCount] SatLevels = float[] (0.0,0.15,0.3,0.45,0.6,0.8,1.0);<br />float[ValLevCount] ValLevels = float[] (0.0,0.3,0.6,1.0);<br /><br />vec3 RGBtoHSV( float r, float g, float b) {<br /> float minv, maxv, delta;<br /> vec3 res;<br /><br /> minv = min(min(r, g), b);<br /> maxv = max(max(r, g), b);<br /> res.z = maxv; // v<br /> <br /> delta = maxv - minv;<br /><br /> if( maxv != 0.0 )<br /> res.y = delta / maxv; // s<br /> else {<br /> // r = g = b = 0 // s = 0, v is undefined<br /> res.y = 0.0;<br /> res.x = -1.0;<br /> return res;<br /> }<br /><br /> if( r == maxv )<br /> res.x = ( g - b ) / delta; // between yellow & magenta<br /> else if( g == maxv )<br /> res.x = 2.0 + ( b - r ) / delta; // between cyan & yellow<br /> else<br /> res.x = 4.0 + ( r - g ) / delta; // between magenta & cyan<br /><br /> res.x = res.x * 60.0; // degrees<br /> if( res.x < 0.0 )<br /> res.x = res.x + 360.0;<br /> <br /> 
return res;<br />}<br /><br />vec3 HSVtoRGB(float h, float s, float v ) {<br /> int i;<br /> float f, p, q, t;<br /> vec3 res;<br /><br /> if( s == 0.0 ) {<br /> // achromatic (grey)<br /> res.x = v;<br /> res.y = v;<br /> res.z = v;<br /> return res;<br /> }<br /><br /> h /= 60.0; // sector 0 to 5<br /> i = int(floor( h ));<br /> f = h - float(i); // factorial part of h<br /> p = v * ( 1.0 - s );<br /> q = v * ( 1.0 - s * f );<br /> t = v * ( 1.0 - s * ( 1.0 - f ) );<br /><br /> switch( i ) {<br /> case 0:<br /> res.x = v;<br /> res.y = t;<br /> res.z = p;<br /> break;<br /> case 1:<br /> res.x = q;<br /> res.y = v;<br /> res.z = p;<br /> break;<br /> case 2:<br /> res.x = p;<br /> res.y = v;<br /> res.z = t;<br /> break;<br /> case 3:<br /> res.x = p;<br /> res.y = q;<br /> res.z = v;<br /> break;<br /> case 4:<br /> res.x = t;<br /> res.y = p;<br /> res.z = v;<br /> break;<br /> default: // case 5:<br /> res.x = v;<br /> res.y = p;<br /> res.z = q;<br /> break;<br /> }<br /> <br /> return res;<br />}<br /><br />float nearestLevel(float col, int mode) {<br /> int levCount;<br /> if (mode==0) levCount = HueLevCount;<br /> if (mode==1) levCount = SatLevCount;<br /> if (mode==2) levCount = ValLevCount;<br /> <br /> for (int i =0; i<levCount-1; i++ ) {<br /> if (mode==0) {<br /> if (col >= HueLevels[i] && col <= HueLevels[i+1]) {<br /> return HueLevels[i+1];<br /> }<br /> }<br /> if (mode==1) {<br /> if (col >= SatLevels[i] && col <= SatLevels[i+1]) {<br /> return SatLevels[i+1];<br /> }<br /> }<br /> if (mode==2) {<br /> if (col >= ValLevels[i] && col <= ValLevels[i+1]) {<br /> return ValLevels[i+1];<br /> }<br /> }<br /> }<br />}<br /><br />// averaged pixel intensity from 3 color channels<br />float avg_intensity(vec4 pix) {<br /> return (pix.r + pix.g + pix.b)/3.;<br />}<br /><br />vec4 get_pixel(vec2 coords, float dx, float dy) {<br /> return texture2D(Texture0,coords + vec2(dx, dy));<br />}<br /><br />// returns pixel color<br />float IsEdge(in vec2 coords){<br 
/> float dxtex = 1.0 /float(textureSize(Texture0,0)) ;<br /> float dytex = 1.0 /float(textureSize(Texture0,0));<br /> float pix[9];<br /> int k = -1;<br /> float delta;<br /><br /> // read neighboring pixel intensities<br /> for (int i=-1; i<2; i++) {<br /> for(int j=-1; j<2; j++) {<br /> k++;<br /> pix[k] = avg_intensity(get_pixel(coords,float(i)*dxtex,<br /> float(j)*dytex));<br /> }<br /> }<br /><br /> // average color differences around neighboring pixels<br /> delta = (abs(pix[1]-pix[7])+<br /> abs(pix[5]-pix[3]) +<br /> abs(pix[0]-pix[8])+<br /> abs(pix[2]-pix[6])<br /> )/4.;<br /><br /> return clamp(5.5*delta,0.0,1.0);<br />}<br /><br />void main(void)<br />{<br /> vec4 colorOrg = texture2D( Texture0, texCoord );<br /> vec3 vHSV = RGBtoHSV(colorOrg.r,colorOrg.g,colorOrg.b);<br /> vHSV.x = nearestLevel(vHSV.x, 0);<br /> vHSV.y = nearestLevel(vHSV.y, 1);<br /> vHSV.z = nearestLevel(vHSV.z, 2);<br /> float edg = IsEdge(texCoord);<br /> vec3 vRGB = (edg >= 0.3)? vec3(0.0,0.0,0.0):HSVtoRGB(vHSV.x,vHSV.y,vHSV.z);<br /> gl_FragColor = vec4(vRGB.x,vRGB.y,vRGB.z,1.0);<br />}<br /></pre><hr><br /><br />So from this car<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_hTm-LSlj8U4/TTHWsM-crwI/AAAAAAAAAPs/B-e9sUFX7Ho/s1600/car.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 255px;" src="http://4.bp.blogspot.com/_hTm-LSlj8U4/TTHWsM-crwI/AAAAAAAAAPs/B-e9sUFX7Ho/s400/car.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5562463069665013506" /></a><br /><br />we will get this toon-car after shader is applied:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TTHXHLttwCI/AAAAAAAAAP0/JosHXP0AF4A/s1600/toonCar.bmp"><img style="cursor:pointer; cursor:hand;width: 400px; height: 253px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TTHXHLttwCI/AAAAAAAAAP0/JosHXP0AF4A/s400/toonCar.bmp" border="0" alt=""id="BLOGGER_PHOTO_ID_5562463533182861346" 
/></a><br /><br />Have a fun with shaders !Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-20533228829866405062010-12-08T09:16:00.000-08:002010-12-08T10:04:17.542-08:00Project Euler, problem 26Problem is stated as following:<hr><pre><br />A unit fraction contains 1 in the numerator. The decimal representation of the unit <br />fractions with denominators 2 to 10 are given:<br /><br /> 1/2 = 0.5<br /> 1/3 = 0.(3)<br /> 1/4 = 0.25<br /> 1/5 = 0.2<br /> 1/6 = 0.1(6)<br /> 1/7 = 0.(142857)<br /> 1/8 = 0.125<br /> 1/9 = 0.(1)<br /> 1/10 = 0.1<br /><br />Where 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be seen<br />that 1/7 has a 6-digit recurring cycle.<br /><br />Find the value of d < 1000 for which 1/d contains the longest recurring cycle in <br />its decimal fraction part.<br /></pre><br /><hr><br />Solution is based on idea that longest recurring cycle should have 1/p form, where p - prime number. After quick look at wikipedia about <a href="http://en.wikipedia.org/wiki/Repeating_decimal#Fractions_with_prime_denominators">repeating decimals</a> I thought that I should just search biggest prime, but at closer examination of wiki, I understood that not necessary biggest prime would result in biggest recurring cycle of 1/prime. So for the sake of mathematical integrity, I decided to implement 'correct' search of biggest recurring cycle, which is based on calculating multiplicative order of 10 modulo prime. Solution also is pretty fast - executes in about 130 ms. 
Below is solution code in language F#:<br /><hr><br /><pre style="color: green"><br />open System.Diagnostics<br /><br />// http://en.wikipedia.org/wiki/Modular_exponentiation<br />let modular_exponent b e m =<br /> let rec modular_exponent0 b e e0 m c0 = <br /> match () with<br /> | _ when e0 >= e -> c0<br /> | _ -> let e1 = e0 + 1 in<br /> let c1 = (c0 * b) % m in<br /> modular_exponent0 b e e1 m c1<br /> modular_exponent0 b e 0 m 1<br /><br />// http://en.wikipedia.org/wiki/Multiplicative_order<br />let multiplicative_order a n =<br /> let rec multiplicative_order0 a n k0 = <br /> let me = modular_exponent a k0 n in<br /> match me with<br /> | 1 -> k0<br /> | _ -> let k1 = k0 + 1 in<br /> multiplicative_order0 a n k1<br /> multiplicative_order0 a n 1<br /><br />// http://en.wikipedia.org/wiki/Repeating_decimal#Fractions_with_prime_denominators<br />let period_of_1divp p = multiplicative_order 10 p<br /><br />// http://en.wikipedia.org/wiki/Prime_number#Verifying_primality<br />let is_prime x = <br /> let rec is_prime0 x d0 = <br /> match () with<br /> | _ when d0 > int (sqrt (float x)) -> true<br /> | _ when x % d0 = 0 -> false<br /> | _ -> let d1 = d0 + 1 in<br /> is_prime0 x d1<br /> is_prime0 x 2<br /><br />let main = let stopwatch = new Stopwatch()<br /> stopwatch.Start()<br /> let prime,max_period = <br /> [ for x in 6..999 do if is_prime x then yield x,period_of_1divp x ] |> <br /> Seq.fold (fun (a,b) (x,y) -> if y > b then (x,y) else (a,b)) (0,0)<br /> stopwatch.Stop()<br /> printfn "Fraction with greatest period of %d is 1/%d, calculated in %d ms" <br /> max_period<br /> prime<br /> stopwatch.ElapsedMilliseconds<br /></pre><br /><hr><br />Have fun with Project Euler and F# !Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-1583571368118570682010-10-21T02:58:00.000-07:002010-10-21T03:30:29.865-07:00Thermal vision pixel shaderThis is how we can get thermal vision pixel shader:<br />1. 
Make some gradient (here we will use blue-yellow-red gradient).<br />2. Make thermal map texture (here we will substitute pixel luminance value for temperature)<br />3. Get pixel's temperature from thermal map texture and map it to gradient value.<br /><br />GLSL code<br /><hr><br /><pre style="color: green"><br />#version 120<br /><br />uniform sampler2D tex;<br /><br />void main()<br />{<br /> vec4 pixcol = texture2D(tex, gl_TexCoord[0].xy);<br /> vec4 colors[3];<br /> colors[0] = vec4(0.,0.,1.,1.);<br /> colors[1] = vec4(1.,1.,0.,1.);<br /> colors[2] = vec4(1.,0.,0.,1.);<br /> float lum = (pixcol.r+pixcol.g+pixcol.b)/3.;<br /> int ix = (lum < 0.5)? 0:1;<br /> vec4 thermal = mix(colors[ix],colors[ix+1],(lum-float(ix)*0.5)/0.5);<br /> gl_FragColor = thermal;<br />}<br /></pre><br /><hr><br />Tank<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_hTm-LSlj8U4/TMAS-aqX7QI/AAAAAAAAAPQ/lK2hur9sHAg/s1600/tank_night.png"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://3.bp.blogspot.com/_hTm-LSlj8U4/TMAS-aqX7QI/AAAAAAAAAPQ/lK2hur9sHAg/s400/tank_night.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5530441205929209090" /></a><br />and after thermal vision shader applied:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_hTm-LSlj8U4/TMAT9w2oeLI/AAAAAAAAAPY/pqT9HWZkteo/s1600/tank_thermal.png"><img style="cursor:pointer; cursor:hand;width: 400px; height: 398px;" src="http://2.bp.blogspot.com/_hTm-LSlj8U4/TMAT9w2oeLI/AAAAAAAAAPY/pqT9HWZkteo/s400/tank_thermal.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5530442294217963698" /></a>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-7408569574805188613.post-76312781010089704852010-07-11T02:52:00.000-07:002010-07-25T08:14:28.912-07:00Convolution Pixel ShaderWhat is convolution ? Simply speaking convolution is weighted sum of pixel values around target pixel. 
If those weights are put into vector <b>W</b> and pixel values around x,y are put in vector <b>P</b>, then in linear algebra terms convolution at x,y is nothing more than dot product of vectors <b>W</b>,<b>P</b>. More precisely:<br /><hr><br />Convolution(x,y) = (<b>W</b>*<b>P</b>)/denominator + offset<br /><hr><br />This <b>W</b> vector is called <i>kernel</i>. (Also formula can be re-written for matrix case, in that case W and P would be matrix). So by using different <i>kernels</i> we could achieve different image convolution effects. Several common convolution kernels:<br /><br /><i>Gaussian Blur kernel</i><br /><pre><br />1., 2., 1.,<br />2., 4., 2.,<br />1., 2., 1.<br /></pre><br /><i>Sharpness kernel</i><br /><pre><br />-1., -1., -1.,<br />-1., 9., -1.,<br />-1., -1., -1.<br /></pre><br /><i>Edge detection kernel</i><br /><pre><br />-1./8., -1./8., -1./8.,<br />-1./8., 1., -1./8.,<br />-1./8., -1./8., -1./8.<br /></pre><br /><i>Emboss kernel</i><br /><pre><br />2., 0., 0.,<br />0., -1., 0.,<br />0., 0., -1.<br /></pre><br />Now, GLSL pixel shader code which performs convolution with all these kernels =><br /><hr><br /><br /><pre style="color: green"><br />#version 150<br /><br />uniform sampler2D Texture0;<br /><br />vec4 get_pixel(in vec2 coords, in float dx, in float dy) { <br /> return texture2D(Texture0,coords + vec2(dx, dy));<br />}<br /><br />float Convolve(in float[9] kernel, in float[9] matrix, <br /> in float denom, in float offset) {<br /> float res = 0.0;<br /> for (int i=0; i<9; i++) {<br /> res += kernel[i]*matrix[i];<br /> }<br /> return clamp(res/denom + offset,0.0,1.0);<br />}<br /><br />float[9] GetData(in int channel) {<br /> float dxtex = 1.0 / float(textureSize(Texture0,0)); <br /> float dytex = 1.0 / float(textureSize(Texture0,0));<br /> float[9] mat;<br /> int k = -1;<br /> for (int i=-1; i<2; i++) { <br /> for(int j=-1; j<2; j++) { <br /> k++; <br /> mat[k] = get_pixel(gl_TexCoord[0].xy,float(i)*dxtex,<br /> 
float(j)*dytex)[channel];<br /> }<br /> }<br /> return mat;<br />}<br /><br />float[9] GetMean(in float[9] matr, in float[9] matg, in float[9] matb) {<br /> float[9] mat;<br /> for (int i=0; i<9; i++) {<br /> mat[i] = (matr[i]+matg[i]+matb[i])/3.;<br /> }<br /> return mat;<br />}<br /><br />void main(void)<br />{<br /> float[9] kerEmboss = float[] (2.,0.,0.,<br /> 0., -1., 0.,<br /> 0., 0., -1.);<br /><br /> float[9] kerSharpness = float[] (-1.,-1.,-1.,<br /> -1., 9., -1.,<br /> -1., -1., -1.);<br /><br /> float[9] kerGausBlur = float[] (1.,2.,1.,<br /> 2., 4., 2.,<br /> 1., 2., 1.);<br /><br /> float[9] kerEdgeDetect = float[] (-1./8.,-1./8.,-1./8.,<br /> -1./8., 1., -1./8.,<br /> -1./8., -1./8., -1./8.);<br /><br /> float matr[9] = GetData(0);<br /> float matg[9] = GetData(1);<br /> float matb[9] = GetData(2);<br /> float mata[9] = GetMean(matr,matg,matb);<br /><br /> // Sharpness kernel<br /> //gl_FragColor = vec4(Convolve(kerSharpness,matr,1.,0.),<br /> // Convolve(kerSharpness,matg,1.,0.),<br /> // Convolve(kerSharpness,matb,1.,0.),1.0);<br /><br /> // Gaussian blur kernel<br /> //gl_FragColor = vec4(Convolve(kerGausBlur,matr,16.,0.),<br /> // Convolve(kerGausBlur,matg,16.,0.),<br /> // Convolve(kerGausBlur,matb,16.,0.),1.0);<br /><br /> // Edge Detection kernel<br /> //gl_FragColor = vec4(Convolve(kerEdgeDetect,mata,0.1,0.),<br /> // Convolve(kerEdgeDetect,mata,0.1,0.),<br /> // Convolve(kerEdgeDetect,mata,0.1,0.),1.0);<br /><br /> // Emboss kernel<br /> gl_FragColor = vec4(Convolve(kerEmboss,mata,1.,1./2.),<br /> Convolve(kerEmboss,mata,1.,1./2.),<br /> Convolve(kerEmboss,mata,1.,1./2.),1.0);<br /><br />}<br /></pre><br /><br /><hr><br />What's left ? 
Results of course :)<br />Here is original image<br /><a href="http://3.bp.blogspot.com/_hTm-LSlj8U4/TDmhQksZWrI/AAAAAAAAAOM/ArstEfAfJG4/s1600/tarant.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 296px;" src="http://3.bp.blogspot.com/_hTm-LSlj8U4/TDmhQksZWrI/AAAAAAAAAOM/ArstEfAfJG4/s400/tarant.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5492598526654896818" /></a><br />convolution with Gaussian Blur kernel<br /><a href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TDmhnXSQ22I/AAAAAAAAAOU/PKv-fSF1DjM/s1600/tarantGaussBlur.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 292px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TDmhnXSQ22I/AAAAAAAAAOU/PKv-fSF1DjM/s400/tarantGaussBlur.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5492598918192618338" /></a><br />convolution with Sharpness kernel<br /><a href="http://4.bp.blogspot.com/_hTm-LSlj8U4/TDmh0hwqcoI/AAAAAAAAAOc/MVzxkxrtKmE/s1600/tarantSharpness.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 292px;" src="http://4.bp.blogspot.com/_hTm-LSlj8U4/TDmh0hwqcoI/AAAAAAAAAOc/MVzxkxrtKmE/s400/tarantSharpness.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5492599144342778498" /></a><br />convolution with Edge detection kernel<br /><a href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TDmiCRlYoDI/AAAAAAAAAOk/acZLd7fI1Lw/s1600/tarantEdgeDet.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 292px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TDmiCRlYoDI/AAAAAAAAAOk/acZLd7fI1Lw/s400/tarantEdgeDet.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5492599380518674482" /></a><br />convolution with Emboss kernel<br /><a href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TDmiTZrcE8I/AAAAAAAAAOs/QBajXZhi3Lg/s1600/tarantEmboss.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 292px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TDmiTZrcE8I/AAAAAAAAAOs/QBajXZhi3Lg/s400/tarantEmboss.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5492599674749326274" 
/></a>Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-7408569574805188613.post-58216501648875585302010-06-25T07:50:00.000-07:002010-07-25T08:19:24.434-07:00Steganography Pixel ShaderSteganography is method for hidding secret message in covert message. In this case I mean hidding secret image into covert image. So, sometimes What You See Is NOT What You Get :) How can we hide secret 3-bit image into other 24-bit image ?<br /><br /><strong>Encoding procedure</strong><br /><br />1.<br />secret 3-bit image means that we can hide 2^3 = 8 color palette image.<br />So at first we need to map these 8 colors to 3-bit pattern. Lets use such mappings:<br /><pre><br />--------------------------<br /> RGB | Bit pattern<br />--------------------------<br />2,2,2 | 0,0,0<br />38,38,38 | 0,0,1<br />74,74,74 | 0,1,0<br />110,110,110 | 1,0,0<br />146,146,146 | 0,1,1<br />182,182,182 | 1,1,0<br />218,218,218 | 1,0,1<br />254,254,254 | 1,1,1<br />--------------------------<br /></pre><br />2.<br />Now as we have color table, we just need to get 3-bit image required pixel and encode it's 3-bit pattern into covert RGB image. One way of doing this is to define secret bit meaning as follows: 0 means covert RBG byte is even, 1 means - odd.<br />So according to this definition we adjust covert image RGB bytes to be even or odd - depending to required 3-bit secret pattern. For example-<br />if 3-bit pattern is (1,0,1) and covert RGB pixel is (140,39,16) - then it will be converted to (141,40,17).<br /><br /><strong>Decoding procedure</strong><br />Secret image extraction from covert image procedure is a reversal of encoding:<br />1. Check covert RGB bytes are even or odd.<br />2. From that extract 3-bit pattern.<br />3. Map this 3-bit pattern to 8 color palette.<br /><br /><strong>Properties of this image hidding method</strong><br />1. Can be used in image formats which doesn't support alpha (transparency) channel.<br />2. 
Original image looses only 3/24 = 12.5 percents of quality, which means that it is hard or impossible to spot by eyes that something is wrong with covert image.<br />3. Regardless of good image quality after conversion, image histogram can show some signs of payload image. Because in some cases histogram is modulated after conversion.<br /><br />Now code examples. Encoding part is left as exercise to the reader :) But here it is payload image extraction code (as always in GLSL shader language):<br />a) Vertex Shader (we need it for correct sampling of texels)<br /><hr><br /><pre style="color: green"><br />varying vec2 texCoord;<br /><br />void main(void)<br />{<br /> gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0 );<br /> texCoord = 0.5 * gl_Position.xy + vec2(0.5); <br />}<br /></pre><br /><hr><br /><br />b) Pixel Shader (real program doing payload image extraction)<br /><hr><br /><pre style="color: green"><br />#version 120<br />uniform sampler2D Texture0;<br />varying vec2 texCoord;<br /><br />int binaryToDecimal(in int d1, in int d2, in int d3) {<br /> return 4*d1+2*d2+d3;<br />}<br /><br />void fillColors(inout float[8] colors) {<br /> colors[0] = 2./255.;<br /> colors[1] = 38./255.;<br /> colors[2] = 74./255.;<br /> colors[3] = 146./255.;<br /> colors[4] = 110./255.;<br /> colors[5] = 218./255.;<br /> colors[6] = 182./255.;<br /> colors[7] = 254./255.;<br />}<br /><br />int Odd(in float num) {<br /> return int(mod(num,2.)!=0.);<br />}<br /><br />void main()<br />{ <br /> float colTable[8];<br /> fillColors(colTable);<br /> vec4 col = texture2D(Texture0, texCoord);<br /> int d1 = Odd(col.r*255.);<br /> int d2 = Odd(col.g*255.);<br /> int d3 = Odd(col.b*255.);<br /> float level = colTable[binaryToDecimal(d1,d2,d3)];<br /> gl_FragColor = vec4(level,level,level,1.0);<br />}<br /></pre><br /><hr><br /><strong>NOTE: When using this GLSL shader for image below - make sure that covert texture is rendered to screen aligned quad of EXACTLY 512x512 pixel dimensions. 
Otherwise you will not get hidden image, but instead you will get just noise because of incorrect pixel/texel samplings !!!</strong><br /><br />Finally,- results. This is covert image:<br /><a href="http://4.bp.blogspot.com/_hTm-LSlj8U4/TCTtEiYr_fI/AAAAAAAAAN8/hR8H2rf215s/s1600/sheep.png"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://4.bp.blogspot.com/_hTm-LSlj8U4/TCTtEiYr_fI/AAAAAAAAAN8/hR8H2rf215s/s400/sheep.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5486770908249718258" /></a><br />and this is payload image extracted from above image (after extraction GLSL shader applied):<br /><a href="http://2.bp.blogspot.com/_hTm-LSlj8U4/TCTt2_dkZ1I/AAAAAAAAAOE/wJJESxjc1kc/s1600/hiddenWolf.PNG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://2.bp.blogspot.com/_hTm-LSlj8U4/TCTt2_dkZ1I/AAAAAAAAAOE/wJJESxjc1kc/s400/hiddenWolf.PNG" border="0" alt=""id="BLOGGER_PHOTO_ID_5486771775048279890" /></a><br /><br />Have fun in making/analyzing covert images !Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-68862236098595064942010-06-17T06:33:00.000-07:002010-07-25T08:21:13.624-07:00Binary filter Pixel ShaderBinary image is image composed only from 2 colors. Typical conversion algorithm to binary image is this -> If pixel's averaged intensity is greater than threshold - draw pixel in first color, otherwise- draw it in second color. But you can define other methods of conversion to binary as well. <br />For example another method can be - If pixel's color is near target color (within error bounds) - draw pixel in first color, otherwise - draw it in second. 
This conversion rule is implemented in following GLSL pixel shader code:<br /><hr><br /><pre style="color: green"><br />uniform sampler2D tex;<br /><br />bool nearColor(in vec4 col, in vec4 tcol, in float error) {<br /> return abs(col.r-tcol.r) < error &&<br /> abs(col.g-tcol.g) < error &&<br /> abs(col.b-tcol.b) < error ;<br />}<br /><br />void main()<br />{<br /> vec4 xcol = texture2D(tex, gl_TexCoord[0].xy);<br /> vec4 scol = vec4(0.737,0.506,0.404,1.0);<br /> if (nearColor(xcol,scol,0.085))<br /> gl_FragColor = scol;<br /> else<br /> gl_FragColor = vec4(0.0,0.0,0.0,1.0);<br />}<br /></pre><br /><hr><br />Image<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_hTm-LSlj8U4/TBopPQ1oD4I/AAAAAAAAANs/FlNSbSigV0s/s1600/vikinglander.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://2.bp.blogspot.com/_hTm-LSlj8U4/TBopPQ1oD4I/AAAAAAAAANs/FlNSbSigV0s/s400/vikinglander.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5483740838471667586" /></a><br />converted to binary mode:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TBophOOY4BI/AAAAAAAAAN0/tvJGpwSoD2M/s1600/vikinglander_binary.jpg"><img style="cursor:pointer; cursor:hand;width: 399px; height: 400px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TBophOOY4BI/AAAAAAAAAN0/tvJGpwSoD2M/s400/vikinglander_binary.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5483741147007868946" /></a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-78597610656605255652010-06-15T01:55:00.000-07:002010-07-25T08:21:33.742-07:00Self-projection Pixel ShaderImage is several times scaled and projected on itself. 
This particular GLSL pixel shader projects the image onto itself twice:<br /><hr><br /><pre style="color: green"><br />uniform sampler2D tex;<br /><br />bool inRectangle(in vec2 xloc, in vec2 loc, in vec2 size) {<br /> return xloc[0] >= loc[0] && <br /> xloc[1] >= loc[1] && <br /> xloc[0] <= loc[0]+size[0] && <br /> xloc[1] <= loc[1]+size[1];<br />}<br /><br />void main()<br />{<br /> vec2 start1 = vec2(0.19,0.11);<br /> vec2 size1 = vec2(0.495,0.285);<br /> vec2 start2 = start1 + start1*size1; <br /> vec2 size2 = size1*size1;<br /> if (inRectangle(gl_TexCoord[0].xy, start2, size2))<br /> gl_FragColor = texture2D(tex, (gl_TexCoord[0].xy - start2)/size2);<br /> else if (inRectangle(gl_TexCoord[0].xy, start1, size1))<br /> gl_FragColor = texture2D(tex, (gl_TexCoord[0].xy - start1)/size1);<br /> else<br /> gl_FragColor = texture2D(tex, gl_TexCoord[0].xy);<br />}<br /></pre><br /><hr><br />Image<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_hTm-LSlj8U4/TBdByUEh07I/AAAAAAAAANc/dYkdL3ToNTo/s1600/tv.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://4.bp.blogspot.com/_hTm-LSlj8U4/TBdByUEh07I/AAAAAAAAANc/dYkdL3ToNTo/s400/tv.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5482923403983836082" /></a><br />and the self-projected version of it<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_hTm-LSlj8U4/TBdCJLugF1I/AAAAAAAAANk/ks9vaaZ9DPQ/s1600/tv_selfprojected.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://2.bp.blogspot.com/_hTm-LSlj8U4/TBdCJLugF1I/AAAAAAAAANk/ks9vaaZ9DPQ/s400/tv_selfprojected.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5482923796880955218" /></a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-52952816524766485872010-06-14T01:17:00.000-07:002010-07-25T08:22:04.121-07:00Pixelation Pixel ShaderPixelation is the process whereby the pixel at x,y is duplicated into the 
x+dx,y+dy rectangle. The pixelation GLSL fragment shader code:<br /><hr><br /><pre style="color: green"><br />uniform sampler2D tex;<br /><br />void main()<br />{<br /> float dx = 15.*(1./512.);<br /> float dy = 10.*(1./512.);<br /> vec2 coord = vec2(dx*floor(gl_TexCoord[0].x/dx),<br /> dy*floor(gl_TexCoord[0].y/dy));<br /> gl_FragColor = texture2D(tex, coord);<br />}<br /></pre><br /><hr><br />Smart<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_hTm-LSlj8U4/TBXnDYJes4I/AAAAAAAAANM/FIQ7I1ZMjeo/s1600/smart.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://3.bp.blogspot.com/_hTm-LSlj8U4/TBXnDYJes4I/AAAAAAAAANM/FIQ7I1ZMjeo/s400/smart.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5482542166601282434" /></a><br />and the pixelated version of it<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_hTm-LSlj8U4/TBXnUM5oDoI/AAAAAAAAANU/O7ra35CXizE/s1600/smart_pixelated.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 398px;" src="http://4.bp.blogspot.com/_hTm-LSlj8U4/TBXnUM5oDoI/AAAAAAAAANU/O7ra35CXizE/s400/smart_pixelated.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5482542455639772802" /></a>Unknownnoreply@blogger.com6tag:blogger.com,1999:blog-7408569574805188613.post-19462622721459033452010-06-11T06:03:00.001-07:002010-07-25T08:23:58.316-07:00Fog Pixel ShaderEach pixel is blended with a fog color: the greater the pixel's distance from the observer location, the more its color approaches the fog color. 
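The blend weight is an exponential falloff, factor = exp(-|distance * density|), so mix(FogColor, CurrentColor, factor) returns the original color at the focal point and (almost) pure fog far away. A CPU-side Python sketch of the same math, with the fog color and density taken from the shader and an arbitrary white test pixel:

```python
import math

def fog_mix(fog, color, distance, density):
    """Linear blend between fog and pixel color, weighted by exp(-|d*density|)."""
    f = math.exp(-abs(distance * density))
    # mix(a, b, f) in GLSL is a*(1-f) + b*f, componentwise.
    return tuple(fg * (1.0 - f) + c * f for fg, c in zip(fog, color))

fog = (0.4, 0.2, 0.2)      # FogColor from the shader (alpha omitted)
color = (1.0, 1.0, 1.0)    # an arbitrary test pixel

print(fog_mix(fog, color, 0.0, 10.0))  # at the focal point: original color
print(fog_mix(fog, color, 1.0, 10.0))  # far away: essentially the fog color
```
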
GLSL fragment code:<br /><br /><hr><br /><pre style="color: green"><br /><br />uniform sampler2D tex;<br /><br />void main()<br />{<br /> float FogDensity = 10.;<br /> vec4 FogColor = vec4(0.4,0.2,0.2,1.0);<br /> vec4 CurrentColor = texture2D(tex, gl_TexCoord[0].xy);<br /><br /> // distance to target<br /> float FogDistance = distance(vec2(0.49,0.46),gl_TexCoord[0].xy);<br /> <br /> // fog factor<br /> float FogFactor = exp(-abs(FogDistance * FogDensity));<br /><br /> // linear blend between fog color and pixel color<br /> gl_FragColor = mix(FogColor,CurrentColor,FogFactor);<br />}<br /></pre><br /><hr><br /><br />Image<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_hTm-LSlj8U4/TBI2BlNbDGI/AAAAAAAAAM4/IJPVWv2FETw/s1600/clock_prague.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://2.bp.blogspot.com/_hTm-LSlj8U4/TBI2BlNbDGI/AAAAAAAAAM4/IJPVWv2FETw/s400/clock_prague.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5481503097259494498" /></a><br />and after the fog shader is applied:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_hTm-LSlj8U4/TBI2tLgnR1I/AAAAAAAAANA/TwyGey9cC8k/s1600/clock_prague_fog.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://4.bp.blogspot.com/_hTm-LSlj8U4/TBI2tLgnR1I/AAAAAAAAANA/TwyGey9cC8k/s400/clock_prague_fog.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5481503846274910034" /></a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-58562529729167041932010-06-09T07:31:00.001-07:002010-07-25T08:25:11.818-07:00Frosted glass Pixel ShaderWe can get a frosted-glass effect by shifting each pixel location by a pseudo-random vector, so that after the shift the characteristic frosted pattern emerges. 
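GLSL has no built-in random generator, so the shader derives one from the texture coordinate using a sine/cosine hash wrapped in fract(). A Python sketch of the same hash (CPU floating point will not match GPU output bit-for-bit, and as in the shader, both components of the offset share the same value):

```python
import math

def rand(x, y):
    """Pseudo-random value in [0,1), mirroring the GLSL hash in the shader."""
    v = math.sin(x * 92.0 + y * 80.0) + math.cos(x * 41.0 + y * 62.0) * 5.1
    return v - math.floor(v)  # GLSL fract()

def frosted_coord(x, y):
    """Sampling coordinate after the frosted-glass shift (offset scale 0.05)."""
    r = rand(x, y)
    return (x + r * 0.05, y + r * 0.05)

print(rand(0.25, 0.75))
print(frosted_coord(0.25, 0.75))
```
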
Frosted glass GLSL pixel shader code:<br /><br /><hr><br /><pre style="color: green"><br />uniform sampler2D tex;<br /><br />float rand(vec2 co){<br /> return fract(sin(dot(co.xy ,vec2(92.,80.))) + <br /> cos(dot(co.xy ,vec2(41.,62.))) * 5.1);<br />}<br /><br />void main()<br />{<br /> vec2 rnd = vec2(rand(gl_TexCoord[0].xy),rand(gl_TexCoord[0].xy));<br /> gl_FragColor = texture2D(tex, gl_TexCoord[0].xy+rnd*0.05);<br />}<br /></pre><br /><hr><br /><br />Original image <br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_hTm-LSlj8U4/TA-qEnNbweI/AAAAAAAAAMo/0jpieaUER1E/s1600/Boxer_dog.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://3.bp.blogspot.com/_hTm-LSlj8U4/TA-qEnNbweI/AAAAAAAAAMo/0jpieaUER1E/s400/Boxer_dog.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5480786267754709474" /></a><br />and processed with the frosted-glass filter<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_hTm-LSlj8U4/TA-q0nguvMI/AAAAAAAAAMw/Dr4mXcH1-z8/s1600/Boxer_dog_frosted.jpg"><img style="cursor:pointer; cursor:hand;width: 398px; height: 400px;" src="http://2.bp.blogspot.com/_hTm-LSlj8U4/TA-q0nguvMI/AAAAAAAAAMw/Dr4mXcH1-z8/s400/Boxer_dog_frosted.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5480787092469365954" /></a>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-7408569574805188613.post-55514184382106044072010-06-07T04:17:00.001-07:002010-07-25T08:25:31.987-07:00Explosion and implosion Pixel ShaderAn explosion/implosion filter can be modeled by shifting each pixel outwards from/towards the center of the image by a value proportional to the pixel's distance from the center.<br />Explosion/Implosion pixel shader code in GLSL:<br /><br /><hr><br /><pre style="color: green"><br />uniform sampler2D tex;<br /><br />void main()<br />{<br /> vec2 cen = vec2(0.5,0.5) - gl_TexCoord[0].xy;<br /> vec2 mcen = - // delete minus for implosion effect<br /> 
0.07*log(length(cen))*normalize(cen);<br /> gl_FragColor = texture2D(tex, gl_TexCoord[0].xy+mcen);<br />}<br /></pre><br /><hr><br /><br />By applying this filter to the image below<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_hTm-LSlj8U4/TAzX1P0gfkI/AAAAAAAAAMQ/KD2Nn50I10A/s1600/power.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 366px;" src="http://3.bp.blogspot.com/_hTm-LSlj8U4/TAzX1P0gfkI/AAAAAAAAAMQ/KD2Nn50I10A/s400/power.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5479992156382854722" /></a><br />we get this explosion effect<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TAzYY7WbH-I/AAAAAAAAAMY/JzvQqgEG3-U/s1600/power_exp.jpg"><img style="cursor:pointer; cursor:hand;width: 399px; height: 400px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TAzYY7WbH-I/AAAAAAAAAMY/JzvQqgEG3-U/s400/power_exp.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5479992769363255266" /></a><br />and this implosion effect<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TAzYr5NzD-I/AAAAAAAAAMg/YH74eggnT0I/s1600/power_imp.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TAzYr5NzD-I/AAAAAAAAAMg/YH74eggnT0I/s400/power_imp.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5479993095207718882" /></a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7408569574805188613.post-72906571902595910162010-06-03T23:56:00.001-07:002010-07-25T08:28:34.295-07:00Edge detection Pixel ShaderThis time I will write a short edge-detection tutorial for the OpenGL GLSL language. 
Suppose we have 9 pixels as shown below:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TAimHrrMJKI/AAAAAAAAALw/lHT8epk6tBA/s1600/edge_pix.png"><img style="cursor:pointer; cursor:hand;width: 200px; height: 200px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TAimHrrMJKI/AAAAAAAAALw/lHT8epk6tBA/s400/edge_pix.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5478811597609378978" /></a><br />We want to find the intensity of pixel I_x after applying an edge-detection filter. In a simple edge-detection model the intensity I_x can be defined as:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_hTm-LSlj8U4/TAipzN9e1rI/AAAAAAAAAL4/5z-69472yLk/s1600/edge_form.png"><img style="cursor:pointer; cursor:hand;width: 361px; height: 42px;" src="http://1.bp.blogspot.com/_hTm-LSlj8U4/TAipzN9e1rI/AAAAAAAAAL4/5z-69472yLk/s400/edge_form.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5478815644082165426" /></a><br />That is, to calculate the pixel intensity we need to find the intensity differences between opposite neighboring pixels and average them. 
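This averaging, together with the soft threshold the shader applies afterwards, can be checked on the CPU. Here is a Python sketch of the core arithmetic, operating on a list of 9 intensities in the same index order the shader's loop produces; the two sample patches are made up for illustration:

```python
def edge_strength(pix):
    """pix: 9 neighborhood intensities; returns edge value in [0,1]."""
    # Average the four opposite-neighbor differences, as in the shader.
    delta = (abs(pix[1] - pix[7]) +
             abs(pix[5] - pix[3]) +
             abs(pix[0] - pix[8]) +
             abs(pix[2] - pix[6])) / 4.0
    val = min(max(1.8 * delta, 0.0), 1.0)  # clamp(1.8*delta, 0., 1.)
    # Soft threshold: suppress weak responses, saturate strong ones.
    if val < 0.25:
        return 0.0
    if val > 0.4:
        return 1.0
    return val

flat = [0.5] * 9                      # uniform patch: no edge
step = [0, 0, 0, 0, 0, 0, 1, 1, 1]   # step in intensity: strong edge

print(edge_strength(flat))
print(edge_strength(step))
```
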
A pixel shader program which implements this idea is given below (in GLSL):<br /><hr><br /><pre style="color: green"><br />uniform sampler2D tex;<br /><br />float threshold(in float thr1, in float thr2, in float val) {<br /> if (val < thr1) {return 0.0;}<br /> if (val > thr2) {return 1.0;}<br /> return val;<br />}<br /><br />// averaged pixel intensity from 3 color channels<br />float avg_intensity(in vec4 pix) {<br /> return (pix.r + pix.g + pix.b)/3.;<br />}<br /><br />vec4 get_pixel(in vec2 coords, in float dx, in float dy) {<br /> return texture2D(tex,coords + vec2(dx, dy));<br />}<br /><br />// returns edge intensity for the pixel at coords<br />float IsEdge(in vec2 coords){<br /> float dxtex = 1.0 / 512.0 /*image width*/;<br /> float dytex = 1.0 / 512.0 /*image height*/;<br /> float pix[9];<br /> int k = -1;<br /> float delta;<br /><br /> // read neighboring pixel intensities<br /> for (int i=-1; i<2; i++) {<br /> for(int j=-1; j<2; j++) {<br /> k++;<br /> pix[k] = avg_intensity(get_pixel(coords,float(i)*dxtex,<br /> float(j)*dytex));<br /> }<br /> }<br /><br /> // average color differences around neighboring pixels<br /> delta = (abs(pix[1]-pix[7])+<br /> abs(pix[5]-pix[3]) +<br /> abs(pix[0]-pix[8])+<br /> abs(pix[2]-pix[6])<br /> )/4.;<br /><br /> return threshold(0.25,0.4,clamp(1.8*delta,0.0,1.0));<br />}<br /><br />void main()<br />{<br /> vec4 color = vec4(0.0,0.0,0.0,1.0);<br /> color.g = IsEdge(gl_TexCoord[0].xy);<br /> gl_FragColor = color;<br />}<br /></pre><br /><hr><br />So, using this shader, this image =><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_hTm-LSlj8U4/TAitFw1cwpI/AAAAAAAAAMA/Ooclrjg-G-o/s1600/astronaut.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 400px;" src="http://2.bp.blogspot.com/_hTm-LSlj8U4/TAitFw1cwpI/AAAAAAAAAMA/Ooclrjg-G-o/s400/astronaut.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5478819261216244370" /></a><br />is transformed into this =><br /><a onblur="try 
{parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_hTm-LSlj8U4/TAitqE9F25I/AAAAAAAAAMI/S326OGOjTBQ/s1600/astronaut_edges.png"><img style="cursor:pointer; cursor:hand;width: 400px; height: 398px;" src="http://4.bp.blogspot.com/_hTm-LSlj8U4/TAitqE9F25I/AAAAAAAAAMI/S326OGOjTBQ/s400/astronaut_edges.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5478819885092297618" /></a><br /><br />For more advanced edge-detection algorithms, start <a href="http://en.wikipedia.org/wiki/Edge_detection">here</a>.<br /><br />Have fun with pixel shaders!Unknownnoreply@blogger.com3