Reinforcement Learning applied to Forex Trading

It is already well known that in 2016 the computer program AlphaGo became the first Go AI to beat a world-champion Go player in a five-game match. AlphaGo combines reinforcement learning with a Monte Carlo tree search algorithm, enabling it to train by playing against itself. This no doubt inspired numerous people around the world, including me. After constructing the automated forex trading system, I decided to implement reinforcement learning in the trading model so that it could adapt to the forex environment in real time.

Environment Setup

The model runs on Windows 10 with an Intel i9-9900K CPU, 16 GB of DDR4-2666 RAM, and an NVIDIA GeForce RTX 2060 GPU. TensorFlow is used to construct the artificial neural network (ANN), in the form of a multilayer perceptron (MLP). The code is modified from the Frozen-Lake example of reinforcement learning with Q-networks. The training process follows the Q-learning algorithm (off-policy TD control), which is illustrated in Fig. 1.

Figure 1. Algorithm for Q-learning and the agent-environment interaction in a Markov decision process (MDP) [1].

At each step, the agent observes the current state, feeds the state values into the MLP to obtain the action estimated to attain the highest reward, performs that action on the environment, and fetches the true reward to correct its parameters. The agent follows an epsilon-greedy policy (ε = 0.1) to strike a balance between exploration and exploitation.
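Below is a minimal sketch of such an update loop in TF2/Keras (the original code is adapted from the Frozen-Lake Q-network example; the layer sizes, discount factor, and learning rate here are illustrative assumptions, not the original settings):

```python
import numpy as np
import tensorflow as tf

n_inputs, n_actions = 36, 3          # state size and action count described below
gamma, epsilon = 0.9, 0.1            # discount factor (assumed) and exploration rate

q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(n_inputs,)),
    tf.keras.layers.Dense(n_actions, activation='sigmoid'),   # one estimate per action
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def choose_action(state):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit the Q-network."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    q = q_net(state[None, :].astype(np.float32))
    return int(tf.argmax(q[0]))

def train_step(state, action, reward, next_state):
    """One Q-learning (off-policy TD) update: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    next_q = q_net(next_state[None, :].astype(np.float32)).numpy()[0]
    td_target = reward + gamma * np.max(next_q)
    with tf.GradientTape() as tape:
        q = q_net(state[None, :].astype(np.float32))
        loss = tf.reduce_mean(tf.square(td_target - q[0, action]))
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
```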

State, Action and Reward

For the 1st generation, price values at certain time points and technical indicators are used to construct the states. The technical indicators used are the exponential moving average (EMA) and Bollinger Bands (N = 20, k = 2), and time frames of 1, 5, and 15 minutes are used, with the last 10 time points recorded. A total of 36 inputs are connected to the MLP.

There are three possible actions for the agent: buy, sell, and do nothing. The action taken by the agent is determined by the three corresponding outputs of the MLP, where sigmoid activation functions map each output to the range 0 ~ 1, representing the probability of the agent taking that action.

For the reward function, the difference between the trade price (the price when a buy/sell action is taken) and the averaged future price is considered. If a buy action is taken, the reward is calculated by subtracting the trade price from the averaged future price; if a sell action is taken, the reward is calculated the other way around. For “do nothing” actions, the reward is 0. A spread is subtracted from the reward of buy/sell actions to obtain the final reward. This prevents the agent from performing actions that yield insignificant profit, which would likely lead to a loss in real trades (Fig. 2).
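A minimal sketch of this reward rule, assuming the averaged future price is computed from a list of sampled future prices:

```python
def reward(action, trade_price, future_prices, spread):
    """Reward for one step: profit relative to the averaged future price, minus the spread."""
    if action == 'nothing':
        return 0.0
    avg_future = sum(future_prices) / len(future_prices)
    if action == 'buy':
        raw = avg_future - trade_price      # a buy profits if the price rises
    else:                                   # 'sell'
        raw = trade_price - avg_future      # a sell profits if the price falls
    return raw - spread
```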

Figure 2. Reward calculation method for buy/sell actions.

Noisy Sine Function Test

For a preliminary check of the effectiveness of the training model and methods, a noisy sine wave is generated with a Brownian-motion offset and a fluctuating frequency. At a given time point t (in minutes), the price is determined by the following equation:

$$P(t)=P_{bias} + P_{amp} \sin\left({2\pi \over T}t\right)+P_{noise}$$

where Pbias is an offset value that follows Brownian motion, Pamp is the price oscillation amplitude, T is the period, which fluctuates over time, and Pnoise is randomly generated noise. (Note that the “price” mentioned here is defined as the exchange rate between two currencies.)
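A short sketch of one way to generate such a series; the random-walk step sizes below are illustrative assumptions rather than the original values:

```python
import numpy as np

def generate_prices(n_minutes, p_bias=1.0, p_amp=0.005, period=120.0, noise_amp=0.001):
    """Noisy sine wave: Brownian-motion offset, fluctuating period, and additive noise."""
    prices = np.empty(n_minutes)
    for t in range(n_minutes):
        p_bias += np.random.normal(0, 1e-5)      # Brownian motion of the offset
        period += np.random.normal(0, 0.05)      # slow fluctuation of the period
        noise = np.random.uniform(-noise_amp, noise_amp)
        prices[t] = p_bias + p_amp * np.sin(2 * np.pi * t / period) + noise
    return prices

prices = generate_prices(50_000)
```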

Fig. 3 shows a randomly generated price-vs-time sequence over a range of 50,000 minutes with initial values Pbias = 1.0, T = 120 min, Pamp = 0.005, and Pnoise amplitude = 0.001. Overall, the price appears to fluctuate randomly with no obvious highs or lows. However, when viewed close up, waves with clear highs and lows can be observed (Fig. 4).

Figure 3. Price vs time of the noisy sine wave from 0 to 50,000 min.

Figure 4. Price vs time of the noisy sine wave from 20000 to 20600 min.

The whole time period is 1,000,000 min (approximately 700 days, or about 2 years). Initially, a random time point is set for the environment. Every time the agent takes an action, there is a 1% chance that the time will jump to another random point within the whole period; otherwise, the time advances to a random point roughly 1 ~ 2 days in the future. This setting is meant to mimic real conditions, where a profitable strategy must both earn steadily and adapt quickly to rapidly changing environments.
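A tiny sketch of this time-stepping rule (the 1 ~ 2-day advance expressed in minutes):

```python
import random

TOTAL_MINUTES = 1_000_000

def next_time(t):
    """With 1% probability jump anywhere in the series; otherwise advance 1 ~ 2 days."""
    if random.random() < 0.01:
        return random.randrange(TOTAL_MINUTES)
    return min(t + random.randint(1440, 2880), TOTAL_MINUTES - 1)
```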

Fig. 5 plots the cumulative profit from trading on the noisy sine wave signal for 50,000 steps. Although it took approximately 25,000 steps for the model to get “on track”, I regard this result as an important first step toward using real data.

Figure 5. Cumulative profit from trading using a noisy sine wave signal.

Fundamental Analysis for Economic Events

Fundamental analysis is a tricky part of forex trading, since economic events not only correlate with each other but can also have opposite effects on the price under different conditions. In this project, I extracted the events that are considered significant and that contain previous, forecast, and actual values for analysis. Data from 14 countries over the past 10 years are downloaded, and columns with incomplete values are discarded, producing a complete table of economic events.

Because different events have different impacts on forex, the price change after the occurrence of each event is monitored, and a correlation is computed between each event and the seven major pairs. Table 1 displays a portion of the correlation table for different economic events. All values are positive, and a larger value indicates a more significant impact of the event on the currency pair. Here, a pair is denoted by the currency other than the USD (e.g. USD/JPY is denoted as JPY).

Table 1. Correlation table between 14 events and 5 currency pairs. Here, a pair is abbreviated as the currency other than the USD.

Country  Economic Event (Index)          AUD      CAD      EUR      GBP      JPY
AUD      Commodity Prices                0.00313  0.00268  0.00266  0.00339  0.00278
AUD      MI Inflation Expectations       0.00338  0.00168  0.00217  0.00200  0.00266
AUD      RBA Interest Rate Decision      0.00428  0.00262  0.00258  0.00298  0.00225
EUR      Manufacturing PMI               0.00331  0.00284  0.00263  0.00298  0.00278
EUR      Italian CPI                     0.00315  0.00319  0.00295  0.00316  0.00255
EUR      Services PMI                    0.00341  0.00290  0.00293  0.00295  0.00284
EUR      CPI                             0.00304  0.00294  0.00262  0.00317  0.00241
EUR      German Unemployment Rate        0.00315  0.00315  0.00273  0.00313  0.00246
EUR      ECB President Trichet Speaks    0.00344  0.00248  0.00341  0.00302  0.00268
EUR      German Unemployment Change      0.00313  0.00313  0.00268  0.00307  0.00243
EUR      German Trade Balance            0.00306  0.00255  0.00300  0.00284  0.00268
EUR      German Factory Orders           0.00292  0.00265  0.00312  0.00280  0.00275
EUR      German Retail Sales             0.00304  0.00304  0.00353  0.00310  0.00275
EUR      French Trade Balance            0.00312  0.00312  0.00296  0.00301  0.00299

A total of 983 events are analyzed. However, because a large portion of the events have little influence on the price, only the 125 events with a relatively significant impact are selected as inputs to the MLP.
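One plausible way to build such a table and pick the most impactful events is sketched below; the 60-minute window, column names, and data structures are assumptions for illustration, not the original implementation:

```python
import pandas as pd

WINDOW_MIN = 60   # assumed observation window after an event release

def event_impact(event_times, prices):
    """Average absolute price change of one pair over a window after each event occurrence.

    event_times: iterable of pd.Timestamp; prices: pd.Series indexed by a sorted DatetimeIndex.
    """
    changes = []
    for t in event_times:
        before = prices.asof(t)
        after = prices.asof(t + pd.Timedelta(minutes=WINDOW_MIN))
        if pd.notna(before) and pd.notna(after):
            changes.append(abs(after - before))
    return sum(changes) / len(changes) if changes else 0.0

# Keep only the most impactful events as MLP inputs (125 in this project), e.g.:
# impact = {name: event_impact(times, eurusd_close) for name, times in events.items()}
# selected = sorted(impact, key=impact.get, reverse=True)[:125]
```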

Real Data Implementation Results

Per-minute exchange rate data for the seven currency pairs are downloaded from histdata.com. The period from 2010 to 2019 is extracted, and blank values are filled by interpolation. This gives a total of approximately 23 million price records (note that weekends have no forex data), which is deemed sufficient for model training. The data are integrated into a table, and technical indicators are calculated using ta, a technical analysis library for Python built on Pandas and Numpy.
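A short sketch of computing the indicators with the ta library, assuming df is a per-minute DataFrame with a 'close' column for one pair:

```python
import pandas as pd
import ta

def add_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Append EMA and Bollinger Band columns (N = 20, k = 2) to a price DataFrame."""
    df['ema'] = ta.trend.EMAIndicator(close=df['close'], window=20).ema_indicator()
    bb = ta.volatility.BollingerBands(close=df['close'], window=20, window_dev=2)
    df['bb_high'] = bb.bollinger_hband()
    df['bb_low'] = bb.bollinger_lband()
    return df
```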

Figure 6. EUR/USD exchange rate from 2010 to 2019.

Summing the inputs from technical analysis, fundamental analysis, and pure price data, a total of 1049 inputs are fed into the MLP. ReLU activations are used in the hidden layers, and a sigmoid activation function is used for the output layer. The output has a shape of 7×3, representing, for each of the seven currency pairs, the probabilities of the three actions (buy, sell, do nothing).
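A sketch of such a network in Keras; the hidden-layer sizes are assumptions, and only the input size (1049) and the 7×3 sigmoid output follow the description above:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation='relu', input_shape=(1049,)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(7 * 3, activation='sigmoid'),
    tf.keras.layers.Reshape((7, 3)),   # one (buy, sell, do nothing) row per currency pair
])
model.summary()
```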

Fig. 7 shows the cumulative profit over 2,000,000 steps in a single episode together with the win rate (the percentage of profitable trades within a moving average). An increasing spread from 0.00001 to 0.00004 is applied: the spread starts at 0.00001 and increases by 0.00001 every 50,000 steps. Overall, the cumulative profit rises steadily, yet the win rate usually falls below the 50% line. How can the strategy still be profitable? Because the average profit of a winning trade (0.003736) is larger than the average loss of a losing trade (0.003581), the overall result is still a profitable trading strategy.

Figure 7. Cumulative profit and win rate from the training procedure of 2,000,000 steps.

Conclusion

In conclusion, a trading model for profitable forex trading is developed using reinforcement learning. The model can automatically adapt to dynamic environments to maximize its profit. Although the model has not yet achieved stable, profitable results under real conditions with a larger spread, its potential for optimization is promising. In the future, I plan to integrate this trading model with the automated forex trading system I have built, and become a competitive player in this fascinating game of forex.

[Source code of RL model training section]

References

[1] R.S. Sutton, A.G. Barto, Reinforcement Learning: An Introduction, MIT Press, 2018.

Automated Microfluidic Controlling Platform

I have always wanted to automate cumbersome experimental procedures. For biochemical experiments, a whole day in the wet lab is often spent just to acquire a single set of data. Having hands-on experience in developing integrated hardware and software systems for several projects (e.g. Surface Plasmon Resonance Platform, Real-time Impedance Detection Systems), I initiated this project to construct a microfluidic controlling platform that can automatically manipulate small volumes of liquid solutions and assist in real-time detection experiments.

Platform Structure

The platform consists of four sections: a Raspberry Pi, an actuator module made up of an Arduino and two H-bridge circuits, a fluid controlling system made up of a syringe pump/syringe device, and the microfluidic platform (Fig. 1).

Figure 1. System architecture of the automated microfluidic controlling platform.

Here, the Raspberry Pi serves as the main processing unit, on which an Apache web server is set up to communicate with a remote user through a website. The front end of the website is designed using HTML, JavaScript, and CSS, and the back end is designed using PHP. JavaScript and PHP communicate using jQuery, and the PHP code controls the peripheral devices.

For x/y position control, the Raspberry Pi sends data through a type-B USB connection to the Arduino, which then drives the H-bridge circuits that control the x/y stepper motors on the microfluidic platform, moving the position of the rack. A servo motor controls the high/low position of the tube it holds, where the high (low) position means the tube is not (is) inserted into the microtube (Fig. 2).

Figure 2. The microfluidic platform and surrounding modules.

For fluid control, the Raspberry Pi sends an infuse/withdraw signal to the syringe pump through another type-B USB connection, which then pushes/pulls the syringe mounted on it. A simple flow for moving liquid from microtube A to microtube B is listed below (a code sketch of the same flow follows the list):

1. Confirm that the tube (for fluid conveyance) position is high. If not, then move it up by servo motor.
2. Move to microtube A by stepper motor.
3. Change tube position to low by servo motor.
4. Withdraw liquid by syringe pump.
5. Change tube position to high by servo motor.
6. Move to microtube B by stepper motor.
7. Change tube position to low by servo motor.
8. Infuse liquid by syringe pump.
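The sketch below illustrates this flow in Python; the serial port names and the single-character command protocol are hypothetical, invented only to make the sequence concrete (the real platform is driven from PHP on the Raspberry Pi):

```python
import time
import serial   # pyserial, assumed installed on the Raspberry Pi

arduino = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)   # stepper + servo control (hypothetical port)
pump = serial.Serial('/dev/ttyUSB1', 9600, timeout=1)      # syringe pump (hypothetical port)

def tube_position(high):
    arduino.write(b'U' if high else b'D')       # servo moves the tube up or down
    time.sleep(1)

def move_to_cell(cell):
    arduino.write(f'M{cell}\n'.encode())        # stepper motors move the rack to a cell
    time.sleep(2)

def pump_volume(microliters):
    cmd = 'W' if microliters < 0 else 'I'       # withdraw / infuse
    pump.write(f'{cmd}{abs(microliters)}\n'.encode())
    time.sleep(3)

# Move 20 uL of liquid from microtube A (cell 1) to microtube B (cell 2)
tube_position(high=True)
move_to_cell(1)
tube_position(high=False)
pump_volume(-20)            # withdraw from A
tube_position(high=True)
move_to_cell(2)
tube_position(high=False)
pump_volume(20)             # infuse into B
tube_position(high=True)
```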

Hardware Development

Fig. 3 shows a photograph of the whole platform. For the microcontrollers, Raspberry Pi 3 B+ and Arduino Uno are used. For the H-bridge, L298N dual driver module is used. Legato® 111 syringe pump (kd Scientific) and Series 700 Microliter syringe (Hamilton) are used for fluid control. HMS-25BY46L38 stepper motors and an SG90 servo motor are used for the microfluidic platform.

Figure 3. Photograph of the automated microfluidic controlling platform.

The circuitry of this platform is relatively simple compared with other projects (e.g. Aroma Alarm Clock), consisting simply of wires and resistors. Except for the motors and the rack with microtubes, all other components of the microfluidic platform are fabricated by 3D printing. 3D-printed gears are fixed to the stepper motors. Precise control of the rack position (~0.2 mm) is realized by combining these gears with a 3D-printed linear gear that fits on the rack and another linear gear on a subsidiary platform that the rack sits on. Below is a clip demonstrating how the 3D-printed components, the stepper motors, the rack with microtubes, and a microfluidic electrode chip are integrated together, along with x/y position control of the rack.

Software Development

The program structure on the Raspberry Pi is quite similar to that of another project, “Real-time Impedance Detection Systems”. The difference in this project is that PHP is used to communicate directly with the peripheral devices (Fig. 1).

Fig. 4 displays the website-based user interface for controlling the platform. The UI is divided into four sections: a procedure window, a button field, a rack window, and a status window. Briefly, the user can save/load settings from the connected Arduino, add/delete a control command, start/pause/stop the current procedure, download the control procedure to a text file, and append a command after another one.

Figure 4. Website user interface of the platform.

Fig. 5 shows the settings menu. Here, the user needs to find the device URLs of the Arduino and the syringe pump beforehand and enter them. Several control preferences can be set, such as the motor operation delay time, the number of stepper-motor steps per cell (microtube), the precise high/low positions of the servo motor, and the current cell of the rack.

Figure 5. Settings menu of the UI.

Fig. 6 shows the add-command menu. The user can move the stepper motor to a target cell (microtube), infuse/withdraw fluid at a user-defined rate and target volume, move the servo to the high/low position, or perform a time delay.

Figure 6. The add command menu of the UI.

Concentration Gradient Generation

An automated concentration-gradient generation procedure is written and carried out by the platform. Here is a clip for demonstration (speed = 10x):

Real-time Impedimetric Detection

The tube does not have to connect directly to a syringe: a detection chip can be inserted in between for real-time detection of different solutions (similar to the method illustrated in Fig. 1 of another project, Surface Plasmon Resonance Platform). A microfluidic interdigitated electrode chip, previously developed by microfabrication as part of my research, is used to detect the electrochemical impedimetric properties of the fluid. Here, potassium ferricyanide (K3Fe(CN)6) and potassium ferrocyanide (K4Fe(CN)6) are serially diluted using the platform, and real-time impedance detection is carried out using an electrochemical analyzer (CHI614b, CH Instruments) and the microfluidic chip.

Fig. 7 shows the detection result. It can be seen that the solution switching time is relatively fast and stable, and the detection time is consistent, which demonstrates the advantages of this automated platform.

Figure 7. Real-time impedimetric detection plot for different diluted concentrations of K3Fe(CN)6/K4Fe(CN)6.

Summary

In summary, a website-controlled automatic microfluidic controlling platform is designed and fabricated for real-time microfluidic sensing and other applications. Solution manipulation using this platform is stable, repeatable, and time-saving compared with manual operation.

Remote Commandable Self-Driving Toy Car

This project is an integration of computer vision, mechatronics, wireless communication (Bluetooth), database management, mobile app design, sensors, and actuators, and is the final project of a course called Design of Automated Systems. I teamed up with two classmates to accomplish the challenging task of constructing a remotely commandable car that can automatically detect and chase balls of different colors, and then be manually navigated to a certain location. My contributions to this project were coding the ball-tracking algorithm using OpenCV, using the Arduino microcontroller for movement control, and wiring the electronic components of the hardware circuit (yellow shaded area in Fig. 1).

System Architecture

Fig. 1 illustrates the system architecture of this project. A Raspberry Pi is used as the main computer. Node.js is installed for communicating with the MariaDB database, receiving command signals from a remote Android app, and commanding the ball-tracking C++ program.

Figure 1. System architecture of the remote commandable self-driving toy car.

A standard procedure for command and action of the system is as follows.

  1. The Android user logs in to the Node.js server; the username and password are verified against the database.
  2. The user sends the target ball color to the server.
  3. The server sends a command to the “face_ball” program via bash.
  4. “face_ball” detects the colored ball’s position using the ball-recognition algorithm.
  5. “face_ball” sends a command to the Arduino over serial USB.
  6. The Arduino controls two servo motors to move the car toward the target ball.
  7. After getting close enough, a fence is deployed to physically trap the ball.

Hardware Design

Fig. 2 shows a 3D drawing of the toy car. Components such as the Arduino and Raspberry Pi (RPi) are fixed to the laser-cut acrylic board using plastic columns. Three servo motors are used: the first two control the left and right wheels, and the third controls a trap that can lock the ball at the front of the car.

Figure 2. 3D illustration of the remote commandable toy car.

Figure 3. Preliminary version of the toy car.

Ball-Tracking Program

The main objective of the ball-tracking program is to navigate the toy car towards a target ball, and trap it with a fence connected to the toy car. The flowchart for image recognition of the ball is shown in Fig. 4.

Figure 4. Flowchart for the ball detection algorithm.

Using the above algorithm, rapid detection of the target ball can be realized. Here’s a demonstration of the program recognizing a ball being thrown in the air in real-time.
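The original program is written in C++; the Python/OpenCV sketch below shows the same color-thresholding idea (the HSV range, camera index, and minimum radius are assumptions, not the tuned values used in the project):

```python
import cv2
import numpy as np

LOWER = np.array([0, 120, 70])      # assumed HSV range for a red ball
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)           # camera attached to the Raspberry Pi
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)        # keep only pixels of the target color
    mask = cv2.erode(mask, None, iterations=2)   # remove small specks of noise
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    if contours:
        c = max(contours, key=cv2.contourArea)   # largest blob is assumed to be the ball
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 10:
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
            # x relative to the frame centre decides whether the car turns left or right
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```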

For movement control of the toy car, the Arduino will either turn left, turn right, or move straight according to the control algorithm shown in Fig. 5.

Figure 5. Flowchart for the ball-trapping algorithm.

[Source code for the ball-tracking program]

Arduino Commands

Fig. 6 shows the commands available for controlling the toy car using Arduino.

Figure 6. Available commands for Arduino.

Here’s a clip for automated control of the toy car. A red ball is defined as the target and is trapped by the car using a fence.

Android App Design

The app is designed using MIT App Inventor, which implements a block-based programming method in which developers create procedures by dragging predefined blocks together to perform a certain function. Fig. 7 shows an image containing the complete code of the Android app. The user interface and the flowchart for direct control of the toy car are shown in Fig. 8.

Figure 7. Complete block code for the Android app (click for magnified view).

Figure 8. App interface (left) and Arduino control flowchart (right).

Here’s a demonstration of the remotely commandable toy car. The car automatically detects a green ball, moves close, and traps it. It is then manually controlled to avoid obstacles and reach a yellow-colored area at the end.

Aroma Alarm Clock

As an old saying goes, “An hour in the morning is worth two in the evening”. What we encounter and conceive during this period largely shapes our whole day. But what is the first thing we encounter in the morning? Waking up. Nowadays, we wake up to the noisy, buzzing sound of an alarm. What would it be like if we could wake up to a scent, a fragrance, wiping away the unsatisfactory memories of the past? What could be changed to make for a better awakening?

An aroma alarm clock could be the answer. When I was a junior, I enrolled in a program called the “NTU Creativity and Entrepreneurship Program” (NTUCEP), in which students with similar ideas form groups and learn about entrepreneurship by practice. After a couple of days of brainstorming, our group came up with the concept of an alarm clock that wakes people up using scent packs. Compared with the loud ringing of an alarm, fragrances (as we proposed) have a milder, more progressive waking effect. We named our product “Sensio”, a Latin noun meaning “thought”.

Market Survey

To verify the market demand for this proposal, we devised a questionnaire investigating people’s views and habits around waking up, from which 1,320 valid responses were collected. A score of 0 ~ 7 is used to quantify the “discomfort level” of waking up, where 0 ~ 3 is regarded as painless and 4 ~ 7 as painful. Fig. 1 shows the discomfort-level distribution of all samples, and Fig. 2 shows the distributions by occupation. Regardless of identity, the majority of people find waking up painful (students: 71%, office workers: 62%, other: 51%, overall: 68%). Moreover, we analyzed their habit of lying in bed after awakening. Fig. 3 shows the proportions and durations of lying in bed: over 44% of people lie in bed for more than 15 minutes, and the average duration is 15.07 minutes. Theoretically, this means that if a product could lower one’s discomfort level of waking up and reduce the time spent lying in bed after awakening, one could gain roughly 3.8 days per year of effective time, enjoying not only a more pleasant morning but also better work efficiency.

Figure 1. Discomfort level distribution of waking up.

Figure 2. Number of people vs discomfort level by occupation.

Figure 3. Duration of lying in bed after awakening.

From the above analysis, we became confident about the large market demand for a better quality of getting up. For me, being the lead engineer in our group of six meant turning ideas into reality. It was a difficult period, but also a time of inspiration and excitement. I put into practice what I had learned over the past few years to develop a system, a piece of hardware that could really be used as a product.

Sensio v0.1

I developed two preliminary versions of the hardware, the first of which uses an Arduino as the CPU (Sensio v0.1) (Fig. 4). In this version, an Arduino chip, a real-time clock module (DS1302), a shift register (74HC595N), relays, button switches, LED displays, a buzzer, a fan, and a potentiometer are used for the alarm clock (Fig. 5). Ten different themes are written into the system, and a few simple features are integrated (Fig. 6 details every mode of the alarm clock and the corresponding button functions). This version serves as the proof of concept for the aroma alarm clock. Although simple and somewhat rough, it demonstrated to my team and our professors the feasibility of our objective.

[Source code for Sensio v0.1]

Figure 4. Photograph of Sensio v0.1.

Figure 5. Hardware architecture of Sensio v0.1.

Figure 6. Alarm modes and button function for Sensio v0.1.

During this period, we earned 40,000 NTD in a fundraising event in our program and won the Best Potential Award in the 14th NTU innovation competition. Fig. 7 shows the poster we used in the competition. We had great enthusiasm and motivation, and dreamed that one day our product would go on sale and we would establish a real listed company of our own. We entered NTU Garage, an incubator for startups closely linked to NTU.

Figure 7. Poster of product Sensio for the NTU innovation competition.

Sensio v0.2

The second version, Sensio v0.2, was created during my second semester as a junior. This time, an ATmega328P IC is used as the CPU instead of an Arduino board, in order to lower the cost of the system and reduce its volume. Moreover, a PCB layout is used to fix the electronic parts on a solid board. Fig. 8 illustrates the structural model of Sensio v0.2, Fig. 9 shows a photograph of the integrated electronic components and the corresponding PCB layout schematic, and Fig. 10 shows photographs of the laser-cut 3D outer case.

[Source code for Sensio v0.2]

Figure 8. Structural model of Sensio v0.2.

Figure 9. Electronic component-integrated PCB board of Sensio v0.2 and its corresponding layout schematic.

Figure 10. Outer case (blue) and scent pack holder (orange) of Sensio v0.2.

We were thrilled by these accomplishments, since no other team in our cohort had achieved such a high level of completeness, especially regarding the integration of hardware and software. These remarkable things happened so quickly, almost too fleeting to keep up with. The truth is, we were still far from success, since we were still students back then. Reality pulled us back, and we started asking ourselves: were we ready to give up some things and dedicate ourselves to entrepreneurship? Our dream product, the aroma alarm clock, had never seemed so close, yet never so far.

Although the functional aroma alarm clock was not successfully completed in the end, what I learned from this experience was unprecedented compared with my past experience: not only techniques for hardware and software development, but also important philosophies of entrepreneurship. This project inspired me to a great extent and enabled me to create future hardware-related projects (e.g. Automated Microfluidic Controlling Platform, Real-time Impedance Detection Systems, Surface Plasmon Resonance Platform) in my own field of research.

Figure 11. Photograph of our team and professor.

Electrochemical Impedance Modeling Programs

I have used a technique called electrochemical impedance spectroscopy (EIS) for biosensing since my undergraduate research. To analyze EIS data, an equivalent circuit must be constructed to model the reaction mechanism, and physical parameters can then be extracted by fitting the data with the model. However, in most software, the provided circuit elements and the circuits that can be built from them do not necessarily meet my needs for finding the physical parameters of a symmetric electrode system. Therefore, I developed a circuit fitting program for customized analysis of impedance data. The fitting and impedance calculation programs are used in a first-author journal paper of mine. In the following paragraphs, the development of the fitting program, another program that assists data visualization, and a program for calculating the diffusion impedance of interdigitated array (IDA) electrodes are detailed.

Figure 1. (a) The Nyquist plot for visualizing impedance values, and (b) an equivalent circuit with several circuit elements for modeling electrode surface reactions.

Electrochemical Impedance Circuit Fitting Program

The element parameters of a given circuit are found using the Levenberg-Marquardt non-linear fitting method, a general and popular method for finding the minimum of a function E(x), where

$$E(x)={1 \over 2} \sum_{i=1}^m [f_i(x)]^2 $$

For impedance data fitting, a complex non-linear least square process (CNLS) is implemented, and the above equation can be specialized as

$$S=\sum_{i=1}^{N_f} \left[ w_{i,\mathrm{Re}}\left( \mathrm{Re}(Z_{i,cal}) - \mathrm{Re}(Z_{i,exp}) \right)^2+w_{i,\mathrm{Im}}\left( \mathrm{Im}(Z_{i,cal}) - \mathrm{Im}(Z_{i,exp}) \right)^2 \right]$$

where S is the weighted sum of squares of error, Nf is the number of frequencies within an experiment, Zi is the impedance at the i-th frequency, and wi,Re and wi,Im are the statistical weights for the real and imaginary parts of the impedance at the i-th frequency. The subscript exp indicates the experimental value and the subscript cal indicates the value calculated during fitting.

For any kind of circuit, Zi,cal can be calculated from the set of element parameters and the frequency (e.g. R for a resistor, C and f for a capacitor), and S can then be evaluated from its defining equation. By minimizing S, the element parameters are adjusted so that the calculated impedance (Zi,cal) is as close as possible to the experimental values (Zi,exp). Fig. 2 shows an example of a non-linear curve fitting process.
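The actual program is written in C with ALGLIB, but the same CNLS idea can be sketched in a few lines with SciPy's Levenberg-Marquardt implementation, here for a simple R(RC) circuit fitted to synthetic data with modulus weighting (one common choice of weights):

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(params, f):
    """Impedance of R_s in series with a parallel R_ct / C_dl block."""
    rs, rct, c = params
    z_c = 1.0 / (1j * 2 * np.pi * f * c)
    return rs + (rct * z_c) / (rct + z_c)

def residuals(params, f, z_exp):
    """Weighted real and imaginary residuals, stacked into one real vector (the terms of S)."""
    z_cal = z_model(params, f)
    w = 1.0 / np.abs(z_exp)                    # modulus weighting
    return np.concatenate([w * (z_cal.real - z_exp.real),
                           w * (z_cal.imag - z_exp.imag)])

f = np.logspace(5, -1, 60)                     # 100 kHz down to 0.1 Hz
true_params = (100.0, 1e4, 1e-6)
z_exp = z_model(true_params, f) + np.random.normal(0, 1, f.size) + 1j * np.random.normal(0, 1, f.size)

fit = least_squares(residuals, x0=(50.0, 5e3, 5e-7), args=(f, z_exp), method='lm')
print(fit.x)                                   # fitted Rs, Rct, Cdl
```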

Figure 2. A non-linear curve fitting process (source: https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm).

A program written in C is designed around the open-source numerical analysis library ALGLIB, which is used for the numerical integrations and calculations, as well as for the non-linear least-squares fitting of EIS data with the Levenberg-Marquardt method. Several circuit elements are available for EIS data fitting in this program, as shown in Table 1. The detailed method for calculating the IDA diffusion element is described in my first-author paper.

Table 1. Available circuit elements in the fitting program.

A circuit description code is defined to express equivalent circuits. Elements, including whole blocks enclosed in parentheses, are combined either in series or in parallel with each other: those inside an odd number of pairs of parentheses are in parallel with each other, and those inside an even number of pairs are in series. Fig. 3 shows the circuit equivalent to the description code “R(RC)(R(RC))”.
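As a hedged sketch of how such a code maps to an impedance, the circuit “R(RC)(R(RC))” from Fig. 3 can be evaluated by combining series and parallel blocks (element values below are arbitrary):

```python
import numpy as np

def z_r(r, w):
    return np.full_like(w, r, dtype=complex)        # resistor

def z_c(c, w):
    return 1.0 / (1j * w * c)                       # capacitor

def series(*zs):
    return sum(zs)

def parallel(*zs):
    return 1.0 / sum(1.0 / z for z in zs)

w = 2 * np.pi * np.logspace(5, -1, 50)              # angular frequencies
# "R(RC)(R(RC))"  ->  R1 + (R2 || C1) + (R3 || (R4 + C2))
z_total = series(z_r(50.0, w),
                 parallel(z_r(200.0, w), z_c(1e-6, w)),
                 parallel(z_r(1e3, w), series(z_r(300.0, w), z_c(1e-5, w))))
print(z_total[:3])
```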

Figure 3. Circuit description code and its corresponding circuit. Equivalent parts are marked by identical colored blocks.

Figure 4. Snapshot of the equivalent circuit fitting program.

[Download equivalent circuit fitting program (Runs on Windows 64-bit environment)]

[Source code for the equivalent circuit fitting program]

Real-time Impedance Plotting Program

Each element in a circuit contributes to the impedance to a certain degree. For instance, a resistor influences the real part of the impedance, and a capacitor affects its imaginary part. However, when the circuit consists of elements in series or parallel combinations, it can be quite difficult to imagine how the shape of the impedance data would change with each element.

Therefore, I wrote a program in the Processing language for plotting impedance data in real time. After the circuit description code is entered into the program, a Nyquist plot, a total-impedance Bode plot, and a phase-angle Bode plot are generated. The user can use horizontal bars to control the value of a specific element; when a value is changed, the impedance is recalculated immediately and the plots are redrawn in real time.

Figure 5. User interface for the real-time impedance plotting program.

Here’s a clip demonstrating the real-time impedance plotting program.

[Source code for real-time impedance plotting program]

IDA Diffusion Impedance Calculation Program

The IDA diffusion element is a novel element for parameterizing the diffusion impedance of an IDA electrode using its shape factor (we/w), the magnitude of the admittance (Y0), and the dimensionless frequency (w²ω/D). However, complex functions, such as the Bessel function, the complete elliptic integral of the second kind, and definite integrals, are required to calculate its value.

Because of these calculation difficulties, a program is written for calculating the IDA diffusion impedance. The program consists of two files, “settings.txt” and “IDA_diff_Z.exe”. “settings.txt” is of the format below:

The file contains all 12 parameters needed to use the program. A detailed description of each, in order, is listed below:

  1. If [output to txt file] is 0, then calculated impedances will be shown directly on screen. If it is 1, then the values will be saved to the file “Z_diffusion_IDA.txt”.
  2. The maximum frequency for impedance calculation can be set after [max freq (Hz)].
  3. The minimum frequency for impedance calculation can be set after [min freq (Hz)].
  4. The number of frequency points within a decade can be set after [points per decade] (e.g. if it is set to 5, then the frequencies calculated between 10^1 and 10^0 Hz will be 10^1, 10^0.8, 10^0.6, 10^0.4, 10^0.2, and 10^0 Hz; see the sketch after this list).
  5. we can be set after [electrode bandwidth we (um)].
  6. wg can be set after [gap width wg (um)].
  7. l can be set after [electrode length l (mm)].
  8. N can be set after [number of bands N].
  9. D can be set after [diffusion coefficient D (m^2/s)].
  10. C* can be set after [bulk concentration C* (mM)].
  11. n can be set after [number of electrons n].
  12. T can be set after [temperature T (K)].
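The logarithmic spacing described in item 4 can be sketched as follows (frequency limits and points per decade are example values):

```python
import numpy as np

f_max, f_min, per_decade = 1e5, 1e-1, 5
n_decades = np.log10(f_max) - np.log10(f_min)
freqs = np.logspace(np.log10(f_max), np.log10(f_min), int(n_decades * per_decade) + 1)
# per_decade = 5 gives exponent steps of 0.2, e.g. 10^1, 10^0.8, ..., 10^0 between 10 Hz and 1 Hz
```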

After setting the 12 parameters, put “settings.txt” and “IDA_diff_Z.exe” in the same directory and open “IDA_diff_Z.exe”. Impedances will be automatically calculated and printed in the defined format (Fig. 6). ※The program can only be run in a Windows 64-bit environment.

Figure 6. An example output of the IDA diffusion impedance calculation program.

[Download IDA diffusion impedance calculation program]

Minesweeper AI

I got addicted to Minesweeper within the first few days of playing this awesome puzzle game. Solving the game a little faster every few games gave me great satisfaction. However, for the “expert” board setting, where 99 mines are hidden within a 16×30 grid, I had never obtained a score below 100 seconds. Curious about the fastest solving time one can achieve, I designed a Minesweeper AI program that automatically plays the Windows Minesweeper game.

C language is used for programming, and the algorithm flowchart is displayed in Fig. 1.

Figure 1. Algorithm flowchart of the Minesweeper AI program.

Here, breadth-first search (BFS) is used to search for uncovered squares on the grid, and a queue stores the uncovered squares to be analyzed. A game-playing algorithm optimized for speed is written into the program. OpenCV is used to take a snapshot of the Minesweeper window region and process the image for subsequent calculations, and mouse movement and click actions are realized through the windows.h header.
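A minimal Python sketch of the BFS idea (the actual program is written in C and reads the board through OpenCV; the board representation here is an assumption):

```python
from collections import deque

ROWS, COLS = 16, 30     # expert board

def neighbours(r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
                yield r + dr, c + dc

def bfs_uncovered(board, start):
    """Collect all uncovered squares connected to `start`, in BFS order, for later analysis."""
    queue, seen, order = deque([start]), {start}, []
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for nr, nc in neighbours(r, c):
            if (nr, nc) not in seen and board[nr][nc] != 'covered':
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order
```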

The usage of the program is relatively simple: the user only needs to launch the Minesweeper game and set the desired game level beforehand. After opening the AI .exe file, the program automatically locates the Minesweeper window, calculates the board dimensions, and starts playing. If a mine is accidentally clicked, the program continues with the next game until a fully uncovered board is achieved.

Here is a clip demonstrating the Minesweeper AI playing an expert level game and winning in 7 seconds. It failed on the first try almost at the end, but succeeded on the second try.

[Download program for the Minesweeper AI (Can only run on a Windows 64-bit OS)]

Robot Arm Control

It’s easy for us to point at a certain coordinate in space. That’s mainly because we locate the point with our eyes and continuously check whether our finger is pointing at that very spot. It would surely be more difficult without using our eyes, and this is the case for robot arm control with no image feedback.

Think of a two-link robot arm (or the six-axis arm in Fig. 1). We usually want its tip to reach a certain point on the x-y plane, but only the joint angles can be controlled. How can we relate the joint angles of a robot to its tip coordinate? Things get harder in 3D space, and harder still when orientation is considered.
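The forward direction, from joint angles to tip coordinate, is easy to write down for the planar two-link case (link lengths below are arbitrary); inverse kinematics asks the opposite, much harder question:

```python
import math

def tip_position(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm: joint angles in, tip (x, y) out."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(tip_position(math.radians(30), math.radians(45)))
```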

In this project, I created a program that calculates every joint angle of the six-axis robot IRB140 needed to position it at a given (x, y, z) coordinate and orientation.

Figure 1. Dimensions of the IRB140 robot (unit: mm) [1].

The problem of working backwards from a specified coordinate and orientation to the rotation angle of every joint belongs to the field of inverse manipulator kinematics, and there may be multiple solutions that lead to the same result. I implemented Pieper’s solution [2] to solve the joint angles of the IRB140 robot.

Here’s a video demonstration for precision control of the IRB140 by only giving the joint angles as the input. The robot follows a trail surrounding a paper box with the tip of the last arm always pointing at the center of the box.

[Source code for robot arm control program]

[1] ABB, IRB140 Product Specification, 2019, https://library.e.abb.com/public/2893a5756d204e19aba0d37c2a2cadc6/3HAC041346%20PS%20IRB%20140-en.pdf
[2] J.J. Craig, Introduction to Robotics: Mechanics & Control, Addison-Wesley, 1986.

Guess the Number AI

After completing the first two projects of the Guess the Number series (Guess the Number (Windows) and Guess the Number (iOS)), I made an AI that can play this game at a strong human level.

It has been proven that at most 7 turns are needed to guess the answer, with a best average game length of 5.21 turns. For this game, all possible combinations (e.g. “0123”, “7381”, …) can be saved in a 1D array, and after each guess, the combinations still possible for the answer are reduced. The algorithm therefore searches for a guess that minimizes the maximum number of possible combinations left. The time complexity of each turn is O(n³), and an average of 5 turns of guessing is needed for an arbitrarily chosen number.
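A sketch of this minimax idea (the original program is not in Python, and the scoring loop below is deliberately unoptimized; in practice the first guess is usually precomputed because scanning all 5040 candidates is slow):

```python
import itertools

# All 5040 codes: 4-digit strings with distinct digits.
CODES = [''.join(p) for p in itertools.permutations('0123456789', 4)]

def score(guess, answer):
    """Return (A, B): A = correct digit in the correct place, B = correct digit in the wrong place."""
    a = sum(g == s for g, s in zip(guess, answer))
    b = len(set(guess) & set(answer)) - a
    return a, b

def best_guess(candidates):
    """Pick the guess that minimizes the worst-case number of candidates remaining."""
    best, best_worst = None, len(candidates) + 1
    for guess in CODES:
        buckets = {}
        for answer in candidates:
            key = score(guess, answer)
            buckets[key] = buckets.get(key, 0) + 1
        worst = max(buckets.values())
        if worst < best_worst:
            best, best_worst = guess, worst
    return best

candidates = CODES[:]
while True:
    guess = best_guess(candidates)
    print("My guess:", guess)
    a, b = map(int, input("Enter A B: ").split())
    if a == 4:
        break
    candidates = [c for c in candidates if score(guess, c) == (a, b)]
```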

To use the program, the user must first choose a 4-digit answer (e.g. “0123”), and then input the two numbers [A] and [B] according to the game rules and the numbers guessed by the program. For instance, if the answer is “1357” and the AI guesses “3127”, the user must input 1 2 ([A] = 1, [B] = 2).

Here’s a demonstration of the AI program guessing the answer “8192” in 5 guesses:

[Download Guess the Number AI program]

[Source code for the Guess the Number AI program]

Nonogram AI

Nonograms (also known as Picross, Griddlers, Pic-a-Pix, and various other names) are picture logic puzzles in which the cells of a grid must be colored or left blank according to the numbers at the side of the grid, so as to reveal a hidden picture.

Figure 1. The initial and complete state of a 25×25 grid Nonogram.
(Source: https://www.nonograms.org/nonograms/i/32344 )

For a classical type of game, the numbers are a form of discrete tomography that measures how many unbroken lines of filled-in squares there are in any given row or column. For example, a clue of “1 2 3” would mean there are sets of one, two, and three filled squares, in that order, with at least one blank square between successive sets.

Solving Nonograms can be very time-consuming and becomes tremendously brain-twisting as the grid size increases, so I created an AI program for automatically solving these puzzles.

The program uses a depth-first search that runs recursively from the top row to the bottom row. For the jth column of the ith row, the black or blank cells must satisfy the tomographic rules given by the clue numbers of the current row and column. This is a relatively simple approach to computing the answer; however, because the algorithm does not use more intuitive constraint-propagation techniques for the interrelated constraints, it takes a long time on large grids (over 40×40).
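A compact sketch of this row-by-row DFS (not the original C program): enumerate every valid filling of each row from its clues, then recurse down the rows while checking that every partially filled column can still satisfy its clue:

```python
def row_patterns(clues, width):
    """Enumerate every way to place the given runs of filled cells (1s) in a row of `width`."""
    if not clues:
        return [(0,) * width]
    first, rest = clues[0], clues[1:]
    max_lead = width - (sum(clues) + len(clues) - 1)   # most blanks allowed before the first run
    patterns = []
    for lead in range(max_lead + 1):
        head = (0,) * lead + (1,) * first
        if rest:
            for tail in row_patterns(rest, width - lead - first - 1):
                patterns.append(head + (0,) + tail)
        else:
            patterns.append(head + (0,) * (width - lead - first))
    return patterns

def column_ok(cells, clue, total_rows):
    """Can this partially filled column (a prefix of the full column) still match its clue?"""
    runs, run = [], 0
    for c in cells:
        if c:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if len(runs) > len(clue) or any(r != k for r, k in zip(runs, clue)):
        return False
    if run and (len(runs) >= len(clue) or run > clue[len(runs)]):
        return False                       # the run still growing at the bottom is already too long
    return sum(clue) - sum(cells) <= total_rows - len(cells)   # enough rows left to finish

def solve(row_clues, col_clues):
    n, m = len(row_clues), len(col_clues)
    options = [row_patterns(tuple(rc), m) for rc in row_clues]
    grid = []

    def dfs(i):
        if i == n:
            # prefixes already match; equal totals force an exact match of every column clue
            return all(sum(row[j] for row in grid) == sum(col_clues[j]) for j in range(m))
        for pattern in options[i]:
            grid.append(pattern)
            if all(column_ok([row[j] for row in grid], col_clues[j], n) for j in range(m)):
                if dfs(i + 1):
                    return True
            grid.pop()
        return False

    return grid if dfs(0) else None

print(solve([[1], [1], [1]], [[1], [1], [1]]))   # tiny 3x3 example: three single cells
```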

The input format of the program is as follows. Two numbers giving the number of rows and columns (n and m) come first. Then there are n lines of data, each line k starting with a number nk indicating how many clue numbers that row has, followed by nk numbers describing the runs of black cells in row k. After that come m lines in the same format: a number mk for the kth line, followed by mk numbers describing the runs of black cells in column k.

Here’s an example input for the Nonogram in Fig. 1:

25 25
1 8
2 7 3
1 16
2 11 4
2 13 2
2 14 2
1 18
2 8 4
2 6 4
2 5 5
3 4 2 2
3 4 3 1
3 3 2 1
2 3 2
2 3 2
3 2 1 4
3 2 1 4
3 2 1 4
3 3 1 4
2 5 4
1 11
1 10
1 5
1 5
1 6
1 1
1 2
1 2
1 4
1 11
1 13
1 15
2 9 3
2 8 3
3 7 5 2
3 7 3 4
3 6 2 3
3 8 3 2
2 13 2
2 10 2
3 5 5 3
4 4 1 4 6
4 1 2 2 8
3 3 1 10
4 3 1 4 4
4 2 1 2 3
3 2 1 2
2 2 1
2 2 1
1 1

A demonstration of the AI program solving a 39×50 Nonogram in about 22 seconds is shown below. The result displays an owl sitting on a branch (Fig. 2).

Figure 2. The complete state of a 39×50 Nonogram displaying an owl on a branch.

[Download program for the Nonogram solver AI]

[Source code for the Nonogram solver AI]