|Existence of numbers
||Numbers do not exist in the computer. We
make a mental mapping between the physics of the computer
(voltage levels, etc.) and the mathematics in our head.
|Number Systems. Ways to represent numbers
||Why the decimal system (base 10) is nothing
special. Base-6 is better, but the numbers get long.
Base-20 was also used (Mayans), as well as base-60
(Babylonians). The Romans were strange. Binary is great
for computing because the underlying electronics are
binary.
||How to convert numbers from any number base
to any other number base.
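The conversion procedure (repeated division by the target base) can be sketched in Python; this is an illustration of the method, not code from the lectures:

```python
def to_base(n, b):
    """Digits of non-negative n in base b, most significant first,
    found by repeated division by b."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % b)   # the remainder is the next digit
        n //= b
    return digits[::-1]

def from_base(digits, b):
    """Evaluate a digit list in base b (Horner's scheme)."""
    value = 0
    for d in digits:
        value = value * b + d
    return value
```

For example, to_base(1234, 16) gives [4, 13, 2], i.e. 0x4D2; converting to any base and back recovers the original number.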
||How are negative numbers stored in the
computer? We will look here at three conventions:
sign-magnitude, 1's-complement and 2's-complement. We
will see that 2's-complement is good because subtraction
can then be done by adding the negated number.
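A small Python sketch (illustrative only) of why 2's-complement is convenient: negation is "flip the bits and add 1", and subtraction then reduces to addition modulo 2^n:

```python
def twos_complement(x, bits=8):
    """Encode integer x as an unsigned two's-complement word."""
    return x & ((1 << bits) - 1)

def subtract_via_add(a, b, bits=8):
    """Compute a - b by adding the two's-complement negation of b."""
    neg_b = twos_complement(~b + 1, bits)          # flip bits, add 1
    result = (twos_complement(a, bits) + neg_b) & ((1 << bits) - 1)
    if result >= 1 << (bits - 1):                  # reinterpret as signed
        result -= 1 << bits
    return result
```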
||The underlying mathematics of a computer.
Boolean Algebra, that itself is based on Set Theory. The
Huntington Postulates are presented that will form our
basis. Some important relations are given.
|de Morgan's Laws
||De Morgan's Laws tell us that the
negation of a 'sum' (OR) is the 'product' (AND) of the
negations, NOT(A OR B) = NOT(A) AND NOT(B) (NOS=PON),
and that the negation of a 'product' (AND) is the 'sum'
(OR) of the negations, NOT(A AND B) = NOT(A) OR NOT(B)
(NOP=SON).
These rules will come in very handy in the future when we design digital circuits.
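Both laws can be verified exhaustively, since there are only four input combinations; a minimal Python check (an illustration, not lecture code):

```python
def de_morgan_holds():
    """Exhaustively check both of de Morgan's laws."""
    for a in (False, True):
        for b in (False, True):
            if (not (a or b)) != ((not a) and (not b)):
                return False   # NOT(A OR B) must equal NOT A AND NOT B
            if (not (a and b)) != ((not a) or (not b)):
                return False   # NOT(A AND B) must equal NOT A OR NOT B
    return True
```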
||Truth Tables are a way to summarize the
behavior of a logic circuit. They specify the output value
(0 or 1) for all combinations of input values at the gate.
In total there are 2^(2^2) = 16 possible
two-input-one-output gates. The most basic ones are:
AND, OR, NAND, NOR and the one-input NOT.
|Sum of Products / Product of Sums
||How to implement any logic function with
only AND, OR and NOT operations.
||Karnaugh Maps are an easy way to simplify
the expressions found by the SOP (sum of products) and POS
(product of sums) methods
|Karnaugh Maps 2
||An additional look at Karnaugh Maps. How
does the XOR operation look in a KM? And how do we deal
with 'don't cares'?
||How are the basic gates implemented in
electronics? We take a look at the NOT, NAND, NOR, AND, OR
and XOR gates implemented in CMOS (complementary metal
oxide semiconductor) technology.
|Multiplexer||A real-life example of how to build a useful digital electronic circuit on the basis of a functional description: a multiplexer (one of the inputs is copied to the output, as determined by the selector lines)|
||We will take a look at what can go wrong in
a sum-of-products solution of a logic function. While the
initial output value is 1 and the final steady-state
output value is also 1, temporarily the output can be 0.
This is a so-called static-1 glitch, and a circuit in
which it can occur has a static-1 hazard. We will also
see how to improve the circuit to eliminate such hazards.
|Memory (latch & flipflop)
||When feedback is used in logic circuits, the emergent property can be that the circuit has memory; the output no longer only depends on the input, but also on the current state.|
||A master-slave flip-flop is made of two
gated S/R latches placed in series. During the first half
of a clock cycle the master is processing the signal,
while during the second half the slave is processing.
|Master/Slave Flip-flops 2
||A close look at an edge-triggered
master-slave D-type flip-flop.
|Finite State Machines
||A finite state machine has both a logic
array and memory. Four types exist: 1) Mealy Machines
(input, output and memory), 2) Moore Machines (sequencers)
with output and memory but no input, 3) Boole Machines
with input and output but no memory, and 4) Berkeley
Machines with input and memory but no output. An example
is given of a Moore Machine sequencer, a 3-bit counter
based on T-type flipflop memory elements.
|Finite State Machines 2. S/R counter
||Another look at a Moore Machine (Finite
State Machine with no input). In this case an example of
how to build a 2-bit counter with S/R flipflops
|FSM Mealy machine. Parity checker
||We will take a look at a full Mealy machine
(with input and output), namely a parity checker. First
we design a state diagram: a circle for each state, with
arrows showing how the state changes with the input.
Then we decide on our hardware (how many and what type
of flipflops to use for the state and output). And then
we can implement things just as we did for a (no-input)
Moore machine: an action table, an excitation table, the
logic array via Karnaugh maps, etc.
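As an illustration only (the lecture builds this in hardware), the behavior of such a parity checker can be simulated in a few lines of Python, the single state bit being toggled by each incoming 1:

```python
def parity_check(bits):
    """Simulate a serial parity checker: the one-bit state holds
    the parity of the bits seen so far and is also the output."""
    state = 0                # start with even parity
    outputs = []
    for b in bits:
        state ^= b           # a 1 toggles the state, a 0 leaves it
        outputs.append(state)
    return outputs
```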
|FSM, Edge Detector
||Here we take a look at another full Finite
State Machine (Mealy Machine) with input and output and
memory, namely an edge detector.
|Road Map summary
||We take a look at how computer architecture
is placed in the tree of knowledge
|Digital Systems Summary 1/3
||A summary of the block of lectures above
|Digital Systems Summary 2/3||A summary of the block of lectures above
|Digital Systems Summary 3/3||A summary of the block of lectures above
||Half adders and full adders as the
cornerstone of computer calculations
||Ripple-carry adder. How to make an n-bit adder from full adders. And how to turn an adder into a subtractor|
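The full-adder equations and the ripple-carry chain can be sketched in Python (an illustration of the logic, not a hardware description):

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum and carry-out from two bits and a carry-in."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, bits=8):
    """n-bit ripple-carry adder: chain full adders from the LSB up,
    feeding each carry-out into the next carry-in."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result            # the final carry is discarded (mod 2^n)
```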
||To improve speed of calculation, instead of 32 full adders in a ripple-carry adder, we will use 8 carry-look-ahead adders of 4 bits|
||How a left/right 1-bit shifter is made|
||A multiplexer (MUX) and demultiplexer (DEMUX)|
|(DE)CODER||An encoder and a decoder|
||A 4-bit comparator: A=B, A<B and A>B|
|Arithmetic and Logic Unit (ALU)
||A look at the Arithmetic and Logic Unit (ALU)|
|Central Processing Unit (CPU)
||A close look at a Central Processing Unit (CPU)|
||How does the control logic work inside a CPU|
|How to go from addition hardware to a multiplication algorithm. The Russian-peasant algorithm|
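A minimal Python sketch of the Russian-peasant algorithm: halve one operand, double the other, and add the doubled value whenever the halved one is odd:

```python
def russian_peasant(a, b):
    """Multiply using only halving, doubling and addition."""
    product = 0
    while a > 0:
        if a & 1:            # halving column is odd:
            product += b     # add the doubling column
        a >>= 1              # halve
        b <<= 1              # double
    return product
```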
||A shift-compare-subtract algorithm to do
divisions with simple hardware
||Some advanced multiplication techniques:
the Karatsuba algorithm (splitting large 2m-bit numbers
into two m-bit numbers and doing the multiplication with
those) and the look-up-table algorithm, in fact done with
only a table of squares
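The Karatsuba idea, three recursive multiplications of half-size numbers instead of four, can be sketched in Python (illustrative; decimal splitting is used here for readability, whereas hardware would split on bits):

```python
def karatsuba(x, y):
    """Multiply with three recursive half-size multiplications."""
    if x < 10 or y < 10:
        return x * y                       # base case: single digits
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    xh, xl = divmod(x, p)                  # split into high/low halves
    yh, yl = divmod(y, p)
    a = karatsuba(xh, yh)                  # high * high
    c = karatsuba(xl, yl)                  # low * low
    b = karatsuba(xh + xl, yh + yl) - a - c  # the cross terms, for free
    return a * p * p + b * p + c
```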
||How a basic memory (RAM) chip works
|Combining memory chips
||How to join memory chips to obtain a larger memory
||The philosophical/physical aspects of
information. About the amount of information gained when
receiving a message and the entropy of a message before
it is received
|Physical memory organization
||The physical organization of memory
|Memory (Software aspects 1/4)
||How is the memory in a computer organized
||Endianness (Little endian vs. Big endian)
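A quick way to see both byte orders, using Python's struct module (an illustration; the lecture itself is about memory layout):

```python
import struct

value = 0x01020304
little = struct.pack('<I', value)   # least significant byte first
big = struct.pack('>I', value)      # most significant byte first
# little is b'\x04\x03\x02\x01', big is b'\x01\x02\x03\x04'
```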
|Memory (Software aspects 2/4)
||More logical aspects of memory organization
are presented. Heap vs. stack. Recursion. Garbage
collection. Global pointer, frame pointer.
|Memory (Software aspects 3/4)
||More aspects of memory organization.
Paging. Virtual memory. Memory management unit (MMU).
Assembly addressing methods
|Memory (Software aspects 4/4). Overlays
||How ROM (read-only memory) is made and how
it compares to static and dynamic RAM
|BIOS and bootstrapping
||About the BIOS and how it can bootstrap the
computer into action
|Buses and motherboards
||How does the CPU communicate with other
components? Here we will see the hierarchy of
communication buses. We'll also take a look at where all
the components are on a motherboard
||We will see here how bit errors in
communication can be avoided by hardware and software.
Hardware: twisted pairs. Software: parity checking and
forward error correction (FEC) using Hamming (7,4) coding
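A sketch of Hamming (7,4) in Python (illustrative; the bit ordering p1 p2 d1 p3 d2 d3 d4 is one common convention, not necessarily the lecture's). Each parity bit covers an overlapping set of positions, and the syndrome directly names the position of a single-bit error:

```python
def hamming74_encode(d):
    """Encode 4 data bits as the 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Fix at most one flipped bit; the syndrome is the error position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # p1 checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # p2 checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # p3 checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3       # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # recover the data bits
```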
|Floating point IEEE 754 (1/4)
||How do we represent real numbers on a
computer? In comparison to integer numbers.
|Floating point IEEE 754 (2/4)
||With a limited number of digits, what are
the ranges and precision of the numbers?
|Floating point IEEE 754 (3/4)
||The IEEE 754 standard. How floating point
numbers are represented in 32 bit (single) and 64 bit
(double). Normalized and denormalized numbers. Fractions
and significands. Exponent, excess-127
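The 32-bit layout (1 sign bit, 8 exponent bits in excess-127, 23 fraction bits) can be inspected in Python (an illustration only):

```python
import struct

def float_bits(x):
    """Return (sign, exponent, fraction) fields of a 32-bit float."""
    (i,) = struct.unpack('>I', struct.pack('>f', x))
    sign = i >> 31               # 1 bit
    exponent = (i >> 23) & 0xFF  # 8 bits, excess-127
    fraction = i & 0x7FFFFF      # 23 bits of the significand
    return sign, exponent, fraction
```

For 1.0 the exponent field holds 127 (excess-127 for a true exponent of 0) and the fraction is 0.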
|Floating point (4/4). Divisions
||How multiplications and divisions are done
on a computer. Division consists of finding the
reciprocal of the denominator
|Floating point: Newton Raphson method
||How to implement other mathematical
functions. Here we will see how to use the
Newton-Raphson method to find the square root of an
argument.
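A minimal Python sketch of Newton-Raphson for the square root: applying the method to f(x) = x^2 - a gives the iteration x <- (x + a/x)/2 (illustrative; the lecture covers the derivation):

```python
def nr_sqrt(a, iterations=20):
    """Newton-Raphson on f(x) = x^2 - a: iterate x <- (x + a/x) / 2."""
    x = a if a > 1 else 1.0     # a crude but safe starting guess
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x
```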
||A list of items to take into consideration
when analyzing a computer architecture
||A close look at the Difference Engine of
Charles Babbage (beginning of the 19th century)
||A short summary of the Intel 4004
processor. Bad sound!!!
|Intel 4004 (addendum)
||Some extra features of the Intel 4004
|Intel 4004 (1/2)
||A short summary of the Intel 4004 processor
|MOS 65xx (1/2)
||It was used in the first Apple computers
and the famous Commodore C=64, but how does the MOS 65xx
family of CPUs work?
|MOS 65xx (2/2)
||How a 65xx Assembly instruction is
constructed and translated into machine code
|VICE C=64 emulator and Merlin Assembler
||A C=64 emulator running an assembler to write
programs in 6510 Assembly (Merlin)
||A look at the Intel x86 family of processors
|AVR Atmel (Arduino)
||A close look at the AVR Atmel used in, for
instance, the Arduino platform
||How to write AVR Assembly and upload it to an
AVR processor in an Arduino board. We will use the avra
assembler to translate our code and the avrdude program
to upload it to our board
|x86 Assembly Hello World (1/3): NASM
||How to use the NASM assembler to write an
x86 Assembly program to print "Hello World" using system
calls and interrupts (int 0x80)
|x86 Hello World (2/3): AS
||How to use the AS assembler to write an x86
Assembly program to print "Hello World!" using interrupts
|x86 Assembly: Hello World (3/3): printf
||How to use libraries from other languages
in x86 Assembly. More specifically, how to use printf to
output strings and integers
|x86 in-line Assembly in C
||How to use in-line x86 Assembly in C code
||What is quantum computing and how does it work?
|Asynchronous computing. The brain
||The difference between synchronous and asynchronous
computing. An example of how the brain works, comparing
it to the state-of-the-art Summit supercomputer. Can it
pass a Turing test?
|MIPS Assembly intro||A presentation of the Assembly
programming language as a bridge between
computer-understandable machine language and
higher-level programming languages like C and Pascal,
with constructs such as
for (i=0; i<10; i++)
|How to install the MARS environment to
develop MIPS Assembly programs
||A data flow diagram of the MIPS
architecture to have a mental picture when we are
programming
||We take a look at the 32 registers (each 32
bits) of the MIPS CPU
||A detailed look at a simple MIPS Assembly
program. "Hello World"
||A look at the instruction set of MIPS and
an example of translating an Assembly instruction
add $t2, $t1, $t0
to machine language (binary or hexadecimal)
|MIPS Instruction Set||A look at the instruction set. Being a RISC architecture, the range of available instructions is limited, and we take a brief look at all of them|
|MIPS System calls||System calls are interrupts, temporarily transferring execution to the operating system to perform a task and then continue with the rest of the program. For instance input/output. Here it is described how it is done in MIPS. Moreover, we take a look at pseudo instructions like LA (load address) and LI (load immediate)|
||How the C structure if-then-else is
implemented in (MIPS) Assembly
|Mask & shift
||How masking works (AND, OR, XOR) and how we have logic and arithmetic versions of shifting|
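The masking and shifting operations can be illustrated in Python (values chosen for demonstration; note that Python's >> on negative numbers behaves like the arithmetic shift):

```python
x = 0b10110110
low_nibble = x & 0x0F    # AND mask selects bits      -> 0b0110
set_bit6 = x | 0x40      # OR mask forces bits to 1   -> 0b11110110
toggled = x ^ 0xFF       # XOR mask flips bits        -> 0b01001001
logical = x >> 2         # shifting right; zeros come in from the left
arithmetic = -104 >> 2   # arithmetic shift preserves the sign: -26
```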
||How to implement a for-loop in MIPS
||What MIPS instructions do we have available
to access main memory (outside CPU). As we will see
variables do not exist. It is all pointers! As such, we
can load a character from memory and add 1 to it, as if it
were an integer, and then go back treating it as a char
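The "it is all pointers" point can be mimicked in Python with a bytearray: a character in memory is just a number, so adding 1 to the byte turns it into the next character (an illustration only, not MIPS code):

```python
buf = bytearray(b'HAL')      # three characters stored as raw bytes
for i in range(len(buf)):
    buf[i] += 1              # treat each char as an integer and add 1
# buf now holds b'IBM'
```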
|Structures and arrays
||How structures and arrays are implemented
|Functions (1/3) (jal, jr, $a, $v)||We convert a for-loop into a while-loop and then a construction with if-goto. Then we place the instructions inside the loop in a separate piece of code, a function. To increase flexibility, we pass the information to the function in $a registers and return values in $v registers. Moreover, to be able to call the function anywhere in the code, we need the jal (jump-and-link) instruction and return from the function by jr (jump register)|
|How to use the stack to save and retrieve register values when using function calls. Push and pop
||In the previous lectures we learned how to
use the combination of 'jal' and 'jr $ra' to implement
functions. Then we learned how to take care of the
registers by saving them on the stack. Now we look at
what happens if we have a function calling another
function. We will see that $ra also has to be saved on
the stack.
||How to implement in MIPS recursive
functions, functions that call themselves
|Floating point (1/2)
||What floating-point instructions do we have
available in MIPS Assembly?
|Floating point (2/2)
||An example of a MIPS Assembly program using
floating point calculation