A post after many days! This one describes a simple piece of code I wrote to simulate a deterministic Turing Machine using TensorFlow. It's pretty basic design-wise, using TensorFlow only for the state-change computations. The movement on the tape, and the handling of halts, is taken care of by old-school Python. This is *not* intended to be a practical usage of TensorFlow, it's just something I did and am writing about here :-).

To demonstrate the code, I will use the 3-state Busy Beaver example from the Wikipedia page.

**Encoding the Problem**

Each state is encoded as a real number (an integer in this case, but for whatever reason you could use floats too). So 'A', 'B', 'C' and 'HALT' become 0, 1, 2 and 3 respectively. Accordingly, the *initial and final states* are defined as follows:

```python
initial_state = 0
final_states = [3]
```

Since there can generally be multiple final states, `final_states` is defined as a list. As with the states, the tape symbols are numbers too. The *blank symbol*, as per the Busy Beaver problem definition, is 0.

```python
blank_input = 0
```

Now we come to the *transition function*. It is defined using two 2-D NumPy matrices, one for the inputs and the other for the corresponding outputs.

```python
import numpy as np

transition_input = np.array([[0, 0],
                             [0, 1],
                             [1, 0],
                             [1, 1],
                             [2, 0],
                             [2, 1]])

transition_output = np.array([[1, 1, 1],
                              [2, 1, -1],
                              [0, 1, -1],
                              [1, 1, 1],
                              [1, 1, -1],
                              [3, 1, 1]])
```

Consider `transition_input[2]` and `transition_output[2]`. Each input denotes *[current state, current symbol]*. Each output denotes *[next state, next symbol, movement]*. The *movement* value can either be 1 (move the pointer right, i.e. increment the index) or -1 (move the pointer left, i.e. decrement the index). Accordingly, `transition_input[2]` requires the current state to be 1 and the current symbol being read to be 0. If these conditions are met, then the next state and next symbol are 0 and 1 respectively, with the pointer on the tape moving one step to the left.
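Before wiring this into TensorFlow, the matching trick is easy to see in plain NumPy: compare *[current state, current symbol]* against every row of `transition_input` by squared difference, and a minimum of zero marks the matching row. A small sketch (the names `query`, `diff` and `match_index` are mine, not from the code below):

```python
import numpy as np

# Transition table from the 3-state Busy Beaver example
transition_input = np.array([[0, 0], [0, 1], [1, 0],
                             [1, 1], [2, 0], [2, 1]])
transition_output = np.array([[1, 1, 1], [2, 1, -1], [0, 1, -1],
                              [1, 1, 1], [1, 1, -1], [3, 1, 1]])

# Match (current state, current symbol) by squared distance:
# a distance of exactly 0 means the pair appears in the table.
query = np.array([1, 0])  # state B, reading a 0
diff = np.sum((transition_input - query) ** 2, axis=1)
match_index = np.argmin(diff)

assert diff[match_index] == 0         # an exact match exists
print(transition_output[match_index]) # next state 0, write 1, move left
```

The same reduce-min/argmin pattern is what the TensorFlow graph below computes, with a non-zero minimum signalling that no transition applies.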

**The Turing Machine Code**

The actual Turing Machine code follows. It's well documented as always, so you shouldn't have a problem following what's going on if you are well-acquainted with Turing Machines and TensorFlow.

```python
import tensorflow as tf


class TuringMachine(object):
    """
    Implementation of a basic Turing Machine using TensorFlow.
    """

    def __init__(self, initial_state, final_states, blank_input,
                 transition_input, transition_output):
        """
        Initializes the TensorFlow Graph required for running the
        Turing Machine.

        States and symbols should both be encoded as real numbers prior
        to initialization. Accordingly, 'initial_state' should be a
        float denoting the starting state. 'final_states' should be an
        iterable of numbers, denoting all the states the Machine can
        halt at. 'blank_input' should be a float denoting the blank
        input for the Machine.

        'transition_input' and 'transition_output' will together denote
        the transition function followed by the Machine. Both should be
        2-D NumPy matrices. Each inner vector in 'transition_input'
        will contain [current state, current symbol], while each inner
        vector in 'transition_output' will have
        [next state, next symbol, move]. "move" can either be
        1: to increase pointer index by 1 (move right), or
        -1: decrease pointer index by 1 (move left).
        """

        #Store the necessary information as attributes
        self._initial_state = float(initial_state)
        self._final_states = set(final_states)
        self._blank_input = blank_input

        #Initialize Graph
        self._graph = tf.Graph()

        n_transitions = len(transition_input)
        if len(transition_input) != len(transition_output):
            raise ValueError("Number of transition inputs and outputs " +
                             "not same")

        #Initialize all components in the Graph
        with self._graph.as_default():
            #Placeholder for current symbol that's being read from the
            #tape
            self._current_symbol = tf.placeholder("float")

            #Variable to hold current state
            self._current_state = tf.Variable(self._initial_state)

            #Constant Ops for transitions
            transition_in = tf.constant(transition_input)
            transition_out = tf.constant(transition_output)

            #Generate Ops to compute transition
            concat_in = tf.pack([self._current_state,
                                 self._current_symbol])
            packed_in = tf.pack([concat_in for i in
                                 range(n_transitions)])
            transition_diff = tf.reduce_sum(
                tf.pow(tf.sub(packed_in, transition_input), 2), 1)
            #This will return a 0 if there's an accurate match for the
            #transition condition (current state, current symbol).
            #If not, this will return a non-zero value.
            self._transition_match = tf.reduce_min(transition_diff, 0)
            #This will correspond to the index of the match
            match_index = tf.cast(tf.argmin(transition_diff, 0),
                                  "int32")

            #Pick out the transition output
            self._transition = tf.reshape(
                tf.slice(transition_out, tf.pack([match_index, 0]),
                         tf.pack([1, 3])), [3])

            #This will change the state Variable
            self._state_change = tf.assign(
                self._current_state, tf.reshape(
                    tf.cast(tf.slice(self._transition,
                                     tf.constant([0]),
                                     tf.constant([1])),
                            "float32"), []))

            #This will reset the state Variable
            self._state_reset = tf.assign(self._current_state,
                                          self._initial_state)

            #Initialize Session
            self._sess = tf.Session()

            #Initialize Variables
            init_op = tf.initialize_all_variables()
            self._sess.run(init_op)

    def run(self, input_tape, start_index, max_iter=1000):
        """
        Runs the Turing Machine with the given input.

        'input_tape' must be an index-able object, with 'start_index'
        denoting the index number to start computation at. Obviously,
        beware of inputs that may cause the machine to run
        indefinitely. To control this, set 'max_iter' to some number of
        iterations you would want to allow.

        Will return final state and final index. If the Machine halts
        at a non-final state, or goes beyond max_iter, i.e. the Machine
        does not accept the input, will return None, None.

        The input tape will be modified in-place.
        """

        #First reset the state
        self._sess.run(self._state_reset)

        #Run the Machine
        index = start_index
        for i in range(max_iter):
            #Figure out input to be given
            if index > -1 and len(input_tape) > index:
                machine_input = input_tape[index]
            else:
                machine_input = self._blank_input

            #Run one step of the Machine
            _, match_value, transition = self._sess.run(
                [self._state_change, self._transition_match,
                 self._transition],
                feed_dict={self._current_symbol: machine_input})

            #If match_value is not zero, return None, None
            if match_value != 0:
                return None, None

            #First modify tape contents at current index as per
            #transition
            if index > -1 and len(input_tape) > index:
                #Index is within input tape limits
                input_tape[index] = transition[1]
            elif index == len(input_tape):
                #Index has gone beyond the range of the input tape,
                #to the right
                input_tape.append(transition[1])
            else:
                #Index has gone beyond the range of the input tape,
                #to the left
                input_tape.insert(0, transition[1])
                index = 0

            #Change index as per transition
            index = index + transition[2]

            #If it's a final state, return state, index
            if transition[0] in self._final_states:
                return transition[0], index

        return None, None
```
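The TensorFlow ops used here (`tf.pack`, `tf.sub`, `tf.initialize_all_variables`) come from an old release and won't run on modern versions. As a sanity check on the run-loop logic (blank handling, tape growth at both ends), here is a plain-Python sketch of the same machine; `run_machine` and the inlined transition dict are my own names and structures, not part of the code above:

```python
def run_machine(input_tape, start_index, initial_state=0,
                final_states=(3,), blank=0, max_iter=1000):
    """Plain-Python version of the same run loop (no TensorFlow),
    hard-wired to the 3-state Busy Beaver transition table."""
    transitions = {
        (0, 0): (1, 1, 1),  (0, 1): (2, 1, -1),
        (1, 0): (0, 1, -1), (1, 1): (1, 1, 1),
        (2, 0): (1, 1, -1), (2, 1): (3, 1, 1),
    }
    state, index = initial_state, start_index
    for _ in range(max_iter):
        #Read the blank symbol outside the tape's current extent
        symbol = input_tape[index] if 0 <= index < len(input_tape) else blank
        if (state, symbol) not in transitions:
            return None, None
        state, written, move = transitions[(state, symbol)]
        #Write to the tape, growing it at either end if needed
        if 0 <= index < len(input_tape):
            input_tape[index] = written
        elif index == len(input_tape):
            input_tape.append(written)
        else:
            input_tape.insert(0, written)
            index = 0
        index += move
        if state in final_states:
            return state, index
    return None, None
```

Note that it keeps the same in-place tape semantics: moving off the left end prepends a cell and resets the index to 0 before applying the move.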

A few things:

1. If the tape 'pointer' goes beyond the limits of `input_tape`, the blank input is automatically given to the machine (the input-selection check at the top of the loop in `run`). Moreover, depending on which end of the tape the pointer has gone across, the output value from the machine is appended or prepended to the list (the tape-modification branches later in the loop).

2. Since there is no way to check if the Machine will stop, the `run` method lets the user provide a maximum number of iterations. If the machine keeps running beyond the limit, `(None, None)` is returned.

3. The `input_tape` list is modified in-place.

4. If you want to track the transitions the Turing Machine follows, add a `print` statement (or some kind of logging) right after the `self._sess.run` call inside `run`.

**Running the Turing Machine**

First we define the contents of the input tape, and the index to start at.

```python
>>> input_tape = [0, 1, 1, 1, 1, 1]
>>> start_index = 0
```

Then, we initialize a `TuringMachine` instance with the algorithm as encoded above:

```python
>>> machine = TuringMachine(initial_state, final_states, blank_input,
...                         transition_input, transition_output)
```

Finally, we ‘run’ the machine, and check if the input tape has been modified correctly:

```python
>>> machine.run(input_tape, start_index)
(3, 5)
>>> input_tape
[1, 1, 1, 1, 1, 1, 1]
>>> machine.run(input_tape, start_index)
(3, 9)
>>> input_tape
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

That's it! This is my first implementation of a Turing Machine, so do leave a comment if there are any errors or possible improvements. Cheers!
