Date: 02-03-2017
Authors: Noah

The FHNW IME has a nice Linux infrastructure. Since it makes all the necessary environments for working with HDL and the associated tools readily available, we are going to use it instead of setting up our own. The systems can be accessed remotely via SSH and VNC, which works very well!

To access the IME servers via SSH, one has to be in the FHNW networks (WLAN/LAN/VPN) and execute the following command:

$ ssh <user>@wi18as33032.adm.ds.fhnw.ch

with <user> and the password prompted for being your FHNW user credentials.

Now a new VNC session can be started. Always reuse existing sessions, or close them if they are not needed anymore, so as not to clutter the IME servers. Existing sessions, general options and howtos can be listed with the vncserver alias:

<user>@ime_servers $ vncserver

The help output is very nice and should be self-explanatory, but a new session can be opened with the command

vncserver -new -geometry <width>x<height>

This command will tell you a port which has to be remembered. The vncserver command always lists the ports of open sessions. In your VNC client, the IME server name and port have to be entered. As a VNC viewer, TigerVNC can be used comfortably; it is available and working for all OSes. The first time you open a VNC session you will be prompted to choose a VNC password. This is the password you use when you log in via VNC. Not your regular FHNW password!

wi18as33032.adm.ds.fhnw.ch:<port>
<vnc password>

Have fun with the tools!


Date: 08-03-2017
Authors: Raphael & Noah

We started working through the code and Simulink models of the previous group. This worked out okay up until the point where the final filter steals 10% of the amplitude in the simulation.

General tooling:

  • msh 2016_HS_P5_P6_FESP fpga to start a shell with all tools and licenses enabled
  • sysgen to start Matlab/Simulink with the Xilinx System Generator enabled

General filestructure:

  • design_sw/Simulation/Simulink_FPGA/Filterkette/ for all the FPGA simulations and generator simulink models
    • Filterkette.m to launch sim and create necessary vars
    • Filterketten.mdl to see simulink model where signals can be seen with a scope
  • design_sw/Simulation/Simulink_Filterdesign/ all the filter simulations
    • Dezimation_V2_m.m to launch general filter simulation with parts convoluted back

Simulation of the Filterkette works well, but on the last filter a good 10% of amplitude is lost:

Checking the frequency response of the H51 filter (last stage), we see that we lose 1 dB => 0.89:

The above picture depicts the frequency response of the H51 filter. However, the plot is not normalized to the sampling frequency, but is instead plotted up to half the system frequency, which amounts to 62.5 MHz. Since the filter chain before the H51 filter has a decimation ratio of 125 in this example (with 625 being the global decimation ratio, and H51 being responsible for one fifth of that), we should scale the horizontal axis accordingly to a maximum value of 500 kHz (125 MHz / 125 / 2). But because we are lazy, we will instead just scale our values as needed so that we can read the correct values off the incorrectly scaled plot.

The input signal has a frequency of 1 kHz, which is 1/500th of 500 kHz. So, on the plot which is scaled to 62.5 MHz, the value corresponding to our input frequency should be at 1/500th of 62.5 MHz, which comes to 125 kHz. The below picture is the small segment of the above plot around that frequency.

As can be seen, the frequency response has an attenuation of 1 dB at this frequency, which is quite accurately 0.89. We therefore conclude that this is the reason why the H51 filter causes roughly 10% signal amplitude loss at this frequency in our simulation.
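These readings can be cross-checked numerically. A quick sanity check in plain Python, using the frequencies and decimation ratio from the text above:

```python
# Sanity check for the axis scaling and the dB reading discussed above.
f_sys = 125e6            # system clock of the Red Pitaya
decim_before_h51 = 125   # decimation ratio in front of the H51 filter
f_plot_max = f_sys / 2   # the mis-scaled plot goes up to half the system frequency

# Correct axis maximum: 125 MHz / 125 / 2 = 500 kHz
f_axis_max = f_sys / decim_before_h51 / 2

# Where the 1 kHz input lands on the mis-scaled axis: 1/500 of 62.5 MHz = 125 kHz
f_in = 1e3
f_on_plot = f_in / f_axis_max * f_plot_max

# -1 dB as a linear amplitude factor: about 0.891, i.e. roughly 10% loss
gain = 10 ** (-1.0 / 20)
```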

The red_pitaya_top.sv file resides in design_sw/RedPitaya-master/fpga/prj/v0.94/project/redpitaya.srcs/sources_1/imports/fpga/prj/v0.94/rtl. It is completely broken. Note to self: check out the Red Pitaya project from scratch and implement our own stuff!

After generating the bitstream, everything is in the directory design_sw/RedPitaya-master/fpga/prj/v0.94/project/redpitaya.runs/impl_1

Deployment: ssh root@10.84.130.54 to log onto the Red Pitaya, rw to remount the filesystem read/write, scp the bitstream to /opt/redpitaya/fpga, and cat the bitstream to /dev/xdevcfg to load it into the FPGA.


Date: 20-03-2017
Authors: Noah

Today I wrote the disposition for the report. It should be a good basis for how to structure the report at its core.

Furthermore, I started crawling the Red Pitaya docs. It is really hard to find good info or real docs. All there is is source code on GitHub, which is decently commented, but that's all. I would really appreciate some guide or tutorial on how to interface with the FPGA properly.

The upside of the research is that all code is open source and the devs seem to be very active on GitHub and responsive to issues opened on said platform.

There is general information on the board on the Red Pitaya Wiki, which seems to contain nothing but that one page. The article seems entirely outdated and lacking in information. The same goes for a forum post I found that partly describes how to interface with the data path; it seems outdated. The same goes for this article, which seems outdated but still contains valid information that could be of use.

Then there is some more modern readthedocs from the guys working on Red Pitaya at the moment. It seems informative but still lacks some key points.

I have opened an issue on GitHub to get more information on the state of the framework from the devs. There appears to be a major revamp of the entire structure. The new mercury branch appears to be the future but is apparently still unstable. I personally would rather work with partly unstable but fixable code than unmaintained old code that can't be fixed anyway, which seems to be the case for the master branch.

Let's hope the dev that is most busy working on the Red Pitaya project atm responds again soon and we get some more answers. Otherwise this is not gonna be fun I fear.

What's more, we probably won't get around learning a bit of Verilog, since the entire codebase is written in it. This should be doable using analogies to VHDL.

The brightest spot today was when I learned that the Red Pitaya team uses Jupyter for testing and interfacing with the Red Pitaya. Awesome!


Date: 21-03-2017
Authors: Noah

Today I found out about an alternative SDK called Koheron. It seems promising and also has examples to build upon. It has to be evaluated to see whether it is worth a shot!

I also dug into filter design with Python, just for fun. I found this gem, which appears to be an implementation of CIC filters in myHDL, a Python HDL framework which transpiles to VHDL/Verilog. The article shows with numbers at the end how many cells a CIC filter uses for a given order. They affirm what was suspected: CIC filters are awesome!

I also dug up the Xilinx CIC Compiler. It doesn't support filter design as far as I understand, but it supports implementing the filters optimally tailored to the Xilinx hardware used, with the ability to explicitly specify which elements on the FPGA to use where there is a choice.

I also started reading up on CIC filters today and they seem to be the real deal. They can decimate very effectively with just a few blocks on the FPGA. Also, Matlab seems to have some CIC filter blocks. How they are to be used is still a mystery to be solved ... Thanks, useful Matlab docs \s.


Date: 22-03-2017
Authors: Raphael & Noah

We continued working with the previous group's work, failing horribly. Generating bitstreams from the previous project or even the stock Pitaya FPGA code currently results in weird behavior. Specifically, we can access and configure the Pitaya via telnet and SCPI:

telnet 10.84.130.54 443
Trying 10.84.130.54...
Connected to 10.84.130.54.
Escape character is '^]'.
1
4
1
1
8191
8191
0.000000
3
ACQ:SRA:HZ?
Error: 3
ACQ:DEC?
Error: 3
ACQ:TRIG:MODE2
2
^]
telnet>
Connection closed.

However, some commands which should exist according to this piece of documentation (found here), e.g. ACQ:SRA:HZ? or ACQ:DEC?, produce errors, as can be seen above. Others work fine (e.g. ACQ:SRAT?). A good list of commands which are verified to work can be found in the Java source code of Mr. Gut's Spectrum Analyzer software.

Despite some commands seemingly working (for example, selecting a sampling rate via SCPI), they seem to have no effect. A 3 MHz sine wave sampled at 122 kHz should be filtered out, but does in fact pass unhindered through the device as far as we can tell. Enabling and disabling an LED via SCPI does not work either. However, we did manage to achieve this by going directly through the Linux system on the Pitaya (which is obviously not the objective, but we were desperate):

echo 1 > /sys/class/leds/led8/brightness
echo 0 > /sys/class/leds/led8/brightness
echo heartbeat > /sys/class/leds/led8/trigger
echo none > /sys/class/leds/led8/trigger

We are not sure at the moment where exactly the fault lies. The official Pitaya project is currently going through a major code rewrite. The old codebase, which was used as the foundation for this project so far, has basically been abandoned. The new codebase, according to the developers themselves, is as of yet unstable. Documentation is, to say the least, incomplete. For the documentation which does exist, it is often not clear whether it is based on the old or the new codebase.

We found some interesting blogs by Pavel Demin and Anton Potocnik which seem to bring a little structure into the chaos.

We decided to follow that lead and inform Prof. Gut about the current problems, since we didn't expect it to be such a pain, considering the Red Pitaya project is well known ...

Raphael is, for now, trying to build the Ubuntu image from the mercury branch. This will require some additional build tools which are currently not present on the IME computers. IT support has been contacted to set up the necessary tools.

Noah is first trying to run the LED blink project from Anton, and as soon as that works out he is going to try to build a fresh Red Pitaya Vivado project.

The LED blink project was successfully built and flashed. The LED flashes; YAY! That is way better than we were able to achieve with the original core!

Since the previous work is based on the legacy code, we are uncertain whether it ever really worked and whether it is a workable solution going forward. We are trying to build the software from scratch together with Peter Schlachter (he needs to provide the proper packages in the IME system) and see how this pans out (the mercury branch is 150 commits ahead of master and only 4 behind!).

We discussed with Prof. Gut that we are free to implement the FPGA core from the bottom up to avoid getting stuck with broken code.

For starters we are going to check the functionality of the ADC/DAC cores from Pavel with the guides from Anton. After that we are going to test a simple biquad and will then go on to implement the IIR and/or CIC filters.

The new primary target is to get the Red Pitaya to run properly before picking up any filter work.

This will allow us to implement CIC filters properly without mistrusting the existing FPGA codebase.

This will most likely require writing some of the trigger functionality ourselves, but that seems doable and okay for now; it will be revisited later on.


Date: 23-03-2017
Authors: Noah

Today I compiled Anton's example no. 4. It worked just as expected, without any issues, verifying that the AXI-Stream interfaces for the ADC and the RAM work as intended. This is very good news, since we can now adapt this project to our needs without any worries. Sure, there will be difficulties as usual, but it sheds some good light on the entire cause.

This means that in the next few days I am going to learn TCL and maybe some Verilog too, to better understand the cores by Pavel. I myself will write code in the supposedly superior VHDL.

I also managed to mount my IME home dir successfully onto my mac. Simply do

brew cask install osxfuse
brew install homebrew/fuse/sshfs

After the packages are installed, do

sshfs <host>: <mountpoint>

to mount the remote directory. Works great!


Date: 28-03-2017
Authors: Noah

The project setup part is close to complete. I conducted a lot of research into how the device tree has to be structured and how the Block Design part has to be arranged.

I want to write a little about the workflow with Vivado, TCL, the device tree and the AXI bus. I will do a full tutorial on the whole matter once the project is finished, but for now the journal will have to suffice for remembering the workflows.

Vivado

Vivado is the IDE Xilinx provides to write firmware for their FPGAs. It comes in a free WebPack edition and licensed editions which contain more functionality. For our more or less general purpose the free edition's features suffice, even though we got a licensed version from IME (our institute). It has an editor to write VHDL/Verilog describing the FPGA functionality, just like you might know it. The editor feels very clunky and isn't really nice to work with. It is no problem at all to use your own editor to edit your VHDL.

Vivado also contains a simulator to test your designs. It is decent and is based on the old ISE (the old Xilinx IDE) simulator. Later on, when I get into simulating pieces of hardware, I will write more about this part.

The really nice part about Vivado is the ability to script everything in TCL. Whilst I do not like TCL, it is okay for interfacing Vivado. Literally everything can be interfaced through TCL. The very generous Pavel Demin wrote some TCL scripts, and Anton Potocnik did a nice tutorial series with them. We are building on them and extending them for our use.

Even though Vivado can be interfaced without ever running the GUI, we are executing TCL scripts from within Vivado. The reason why we do so brings us to the next plus of Vivado:

Imagining complex block structures and configuring proprietary hardware with literally thousands of options is very hard. Luckily, Vivado is your friend in that case: it has a really nice graphical representation of all the blocks used in a project. Arranging them is a little clunky, but Vivado tries its best to rearrange them itself after new components and/or connections are added. Adding cores with extended functionality and configuring the ZYNQ core (for features like external DDR3 RAM or the AXI bus interfaces) can be done by double-clicking a core and browsing through the options. It is a really nice way to browse and discover unknown features and all the possible options.

And the best thing about all of this: whenever you add a component, draw a connection or edit a core, Vivado outputs the corresponding TCL command in the console, so you can conveniently recreate the action and in the future add it directly to the TCL script! Awesome!

So to sum up: in our project we are going to create the project and block design using Vivado TCL commands. Editing VHDL will be done in an external editor of choice. Whilst emacs has one of the best and most advanced VHDL plugins, some might prefer Sigasi, a free/paid plugin for Eclipse; I prefer VS Code, a general-purpose editor. Simulations will be done using Vivado.

Scripting in TCL with Vivado

Generally, the official Vivado Scripting Guide and the article by Elias Kousk can be seen as a good intro to scripting Vivado. Most of the topic's contents are still unknown to me.

For a nice reference on TCL in general, the official references are quite good!

General TCL

In TCL an entered command is seen as a $\lambda$-expression: the first word is the procedure and the following ones are the arguments to that procedure. TCL has a so-called substitution phase, in which names prefixed with a $ sign are replaced by their value. To get something string-like, arguments in TCL can be grouped with "". The substitution phase applies to the contents of "" as well. If the substitution phase should not apply, {} can be used for grouping.

To explain with an example, have a look at the following:

# Assigns 'value' to var
set var value

# Outputs '$var'
puts {$var}

# Outputs 'value'
puts "$var"

If a procedure should be evaluated inline, [] can be used to group it and its arguments:

# An example pulled from the tcl docs
puts [readsensor [selectsensor]]

Interfacing Vivado

Basically, every command can be entered directly into the Vivado TCL console. For convenience, scripts can be executed using the source command:

source script.tcl

The shell has awesome autocompletion. To get autocompletion for files when they reside in the current working directory, prefix the filename with ./

source ./script.tcl

To create a new project we first need to know which Xilinx part we are using. We remember it in the variable part_name. This can be done with the command

set part_name xc7z010clg400-1

in TCL. Next we are actually going to create the project and a top-level block design, and initialize it with the Red Pitaya pin definitions:

# Create a new project
create_project $project_name build/$project_name -part $part_name -force

# Create a new block design for the toplevel
create_bd_design system

# Load the Red Pitaya ports specifications
source cfg/ports.tcl

Then we continue by loading all required HDL files into the project:

# Load any additional Verilog and VHDL files in the project folder
set files [glob -nocomplain $project_name/*.v $project_name/*.sv $project_name/*.vhd]
if {[llength $files] > 0} {
  add_files -norecurse $files
}

Basically, VHDL files alone would suffice, but maybe we will need third-party (System)Verilog files. The glob command here returns a list of all files matching the given patterns; -nocomplain prevents it from raising an error if no files were found.

After this, the basic project setup is done. IP blocks can then be added with the command

create_bd_cell -type ip -vlnv <vendor>:ip:<ip_name>:<version> <identifier>

Example:

create_bd_cell -type ip -vlnv xilinx.com:ip:processing_system7:5.5 processing_system7_0

Properties can then be defined with the following command

set_property -dict [list CONFIG.<PROPERTY_NAME> <VALUE>] [get_bd_cells <identifier_of_bd_cell>]

Example:

set_property -dict [list CONFIG.PCW_IMPORT_BOARD_PRESET {cfg/red_pitaya.xml}] [get_bd_cells processing_system7_0]

Date: 29-03-2017
Authors: Noah

Of course blocks have to be connected to each other. This can be done using the command

connect_bd_net [get_bd_pins $identifier_of_bd_cell_1/$identifier_of_pin_1] [get_bd_pins $identifier_of_bd_cell_2/$identifier_of_pin_2]

Example:

connect_bd_net [get_bd_pins processing_system7_0/M_AXI_GP0_ACLK] [get_bd_pins processing_system7_0/FCLK_CLK0]

After all the blocks have been placed and all connections have been made, we can finish up the project with

# Generates target data for the specified IP or Blockdesign
generate_target all [get_files  $bd_path/system.bd]

# Create a toplevel wrapper for the specified IP or Blockdesign
make_wrapper -files [get_files $bd_path/system.bd] -top

# Add the wrapper file to the project
add_files -norecurse $bd_path/hdl/system_wrapper.v

After this we have our project. From here we can run synthesis and compile a bitstream. YAY!

The AXI Bus

The AMBA AXI bus is an on-chip interconnect bus specified by ARM. It enables the FPGA to "speak" with the ARM Cortex-A9 core. There are options for memory-mapped IO (MMIO) and for streaming between components. AXI has a lot of features which are way too advanced for our scope, but we are going to use the AXI-Lite bus and the AXI-Stream bus.

Basically, an AXI bus consists of data lanes (up to 128 bit) and some control signals. AXI-Lite is an MMIO interface and is kind of advanced in how exactly it works, compared to the AXI-Stream interface. Details can be read in the spec.

The AXI-Stream interface has just two mandatory signals: tdata and tvalid. Technically, even only tvalid is required. In our example we are going to define a new component that exposes an AXI-Stream interface.
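As a toy illustration of the tdata/tvalid signalling (plain Python, not HDL; it deliberately ignores tready/backpressure and clocking details):

```python
# Toy model of an AXI-Stream sink: data is captured on every clock
# edge where the source asserts tvalid (assuming the sink is always ready).
def sink_capture(cycles):
    """cycles: list of (tvalid, tdata) pairs, one per clock edge."""
    return [tdata for tvalid, tdata in cycles if tvalid]

# Source drives data on cycles 0, 2 and 3; tdata is "don't care" while tvalid is low.
received = sink_capture([(1, 0x2A), (0, 0xFF), (1, 0x17), (1, 0x03)])
```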

For that we create a normal VHDL module. No special requirements have to be met, other than the module actually implementing logic that "speaks" AXI-Stream and having a tdata and a tvalid signal. Those signals don't even have to be named like that; we can map them to the corresponding signals later.

The module could look like

entity axis_to_data_lanes is
port (
  AxiTDataxDI: in std_logic_vector(13 downto 0);
  AxiTValid: in std_logic;
  DataClkxCI: in std_logic;
  DataRstxRBI: in std_logic;
  DataxDO: out std_logic_vector(13 downto 0);
  DataStrobexDO: out std_logic;
  DataClkxCO: out std_logic
);
end axis_to_data_lanes;

One can easily see that the module also has clock and reset inputs. Both are of course required for AXI too, but they are actually not part of the AXI interface itself; they go through separate clock and reset interfaces. After the actual module has been created, Vivado somehow has to be told which signals actually belong to the AXI bus.

First we define a new AXI-Stream interface

ipx::add_bus_interface $interface_identifier [ipx::current_core]
set_property abstraction_type_vlnv xilinx.com:interface:axis_rtl:1.0 [ipx::get_bus_interfaces $interface_identifier -of_objects [ipx::current_core]]
set_property bus_type_vlnv xilinx.com:interface:axis:1.0 [ipx::get_bus_interfaces $interface_identifier -of_objects [ipx::current_core]]
set_property display_name $interface_name [ipx::get_bus_interfaces $interface_identifier -of_objects [ipx::current_core]]
set_property description $interface_description [ipx::get_bus_interfaces $interface_identifier -of_objects [ipx::current_core]]

After the interface has been added, ports have to be assigned to it and mapped to the AXI signals. This can be done like so:

# Define new port TDATA
ipx::add_port_map TDATA [ipx::get_bus_interfaces $interface_identifier -of_objects [ipx::current_core]]
set_property physical_name AxiTDataxDI [ipx::get_port_maps TDATA -of_objects [ipx::get_bus_interfaces $interface_identifier -of_objects [ipx::current_core]]]

# Define new port TVALID
ipx::add_port_map TVALID [ipx::get_bus_interfaces $interface_identifier -of_objects [ipx::current_core]]
set_property physical_name AxiTValid [ipx::get_port_maps TVALID -of_objects [ipx::get_bus_interfaces $interface_identifier -of_objects [ipx::current_core]]]

As you can see, the VHDL port names don't matter, as they can easily be remapped. Ports could also be auto-assigned by naming them properly, but I don't like the automagic and the weird signal names in VHDL, so I rather do it the manual but obvious way. Clock and reset interfaces are done analogously. Which properties they can contain can easily be seen by adding them in Vivado and observing the TCL commands printed to the console, and in the TCL command reference guide provided by Xilinx.

The device tree

Ok, so we got the hardware part done. But how does the ARM Cortex-A9 actually get data from the hardware? This can easily be done using /dev/mem on Linux, but that approach is kind of opaque and not really safe. A better but way harder and more complicated approach is to write a kernel module. How to write a kernel module will be explained another day. What should be explained here is how the device tree works. Linux needs a way to figure out which module should be loaded for which piece of hardware; that's what the device tree is for. The device tree is a data structure that is loaded at boot and tells Linux which driver should be loaded and which memory regions and interrupts it is responsible for.
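As a side note, the /dev/mem approach mentioned above can be sketched as follows (illustrative Python; the demo maps an ordinary temp file instead of /dev/mem so it runs without root, and the register offset is made up):

```python
import mmap
import os
import struct
import tempfile

def read_reg32(path, base, reg_offset, span=0x10000):
    """Map `span` bytes starting at `base` of `path` and read one
    little-endian 32-bit register at `reg_offset` within that window.
    On the Pitaya this would be called as e.g.
    read_reg32('/dev/mem', 0x43c40000, 0), which requires root."""
    fd = os.open(path, os.O_RDONLY)
    try:
        mem = mmap.mmap(fd, span, mmap.MAP_SHARED, mmap.PROT_READ, offset=base)
        try:
            return struct.unpack_from('<I', mem, reg_offset)[0]
        finally:
            mem.close()
    finally:
        os.close(fd)

# Demo against a plain file: fake a 64 kByte register window with one register set.
with tempfile.NamedTemporaryFile(delete=False) as demo:
    demo.write(b'\x00' * 0x10000)
    demo.seek(8)
    demo.write(struct.pack('<I', 0xDEADBEEF))
value = read_reg32(demo.name, 0, 8)
os.unlink(demo.name)
```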

How the device tree is structured is pretty well explained by the Xillybus team. The Android sources can also provide some insight.

Xilinx already provides a .dtsi for the ZYNQ7. A .dtsi is basically an include file for a .dts (Device Tree Source) file. The Linux provided by Pavel Demin, which we are going to use, already features a basic .dts. So we just have to insert our piece of the tree, which maps our AXI-Lite MMIO interface in the device tree and loads the kernel module for it.

For the ZYNQ Logger it looks like this

/* ZYNQ Logger */
zynq_logger0: zynq_logger@43c40000 {
    compatible = "zynq_logger";
    reg = <0x43c40000 0x00010000>; /* use 64 kByte address space for the core's registers */
    /* configure which interrupt line is used */
    interrupt-parent = <&intc>;
    interrupts = <0 31 4>;  // 29: F2P[0]
    /* add more parameters here as needed */
};

For now this has to suffice as I am really tired and tomorrow is another day too (or rather today ...).


Date: 29-03-2017
Authors: Noah

Since it was kind of unclear what hardware we actually have on a Red Pitaya, I asked Pavel Demin and he pointed me to this.

This seems to be pretty much up to date and complete.

When I found out that I misread all the docs and the Red Pitaya actually only has 512 MB RAM instead of the assumed 4 GB, I was shocked. For our purpose this should be sufficient, but 512 MB for a Linux and recorded data? WTF?!

Let's do a little calculation. Recording at the maximum 125 MS/s on 2 channels gives 250 MS/s. Now we have to consider that the ADC data is actually 14 bits, which gets padded to 16 bits, so that's again times 2 bytes, leading us to 500 MB/s. Knowing that a Linux that won't instantly lag under small load should have 256 MB at its disposal, we could record half a second of data. Now one could say: well yeah, but you could stream the data to a faster machine, right? 500 MB/s is 4 Gb/s. The Red Pitaya has 1 Gb/s Ethernet like every normal PC, so instantly forget about streaming the data to another machine.

For our purposes we only require data rates of, let's say, 48 kHz at a maximum. That results in 192 kB/s. So for a whopping 5 minutes of audio we would need $48 \cdot 4 \cdot 5 \cdot 60 = 57600$ kB, i.e. 57.6 MB! That's already over 10% of our total amount of RAM! Luckily this can easily be streamed to a PC, so the server/scope will have to feature streaming.
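The back-of-the-envelope numbers above can be reproduced in plain Python (same assumptions as in the text):

```python
# Worst case: full-rate acquisition on both channels.
fs = 125e6               # 125 MS/s per channel
channels = 2
bytes_per_sample = 2     # 14-bit ADC samples padded to 16 bits
rate = fs * channels * bytes_per_sample      # 500 MB/s
rate_gbit = rate * 8 / 1e9                   # 4 Gb/s, too much for 1 Gb/s Ethernet

# Our actual use case: 48 kHz audio-rate data on both channels.
audio_rate = 48e3 * channels * bytes_per_sample   # 192 kB/s
five_minutes_mb = audio_rate * 5 * 60 / 1e6       # 57.6 MB
```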


Date: 29-03-2017
Authors: Raphael

general

Acquired dual widescreen HD monitors and customized Vim and the terminal on the IME computer. Working on that platform should now be significantly more pleasurable.

mtech shell customisation

  • Create symlink in ~/svn directory: ln -s ~/svn/git ~/pitaya
  • create file ~/pitaya/mgc/mtech_dev_technos
  • add desired mtech shell environments to that file
  • the file ~/pitaya/mgc/mtech_dev_config will have been created; edit it to set REPO_NAME=2016_HS_P5_P6_FESP

File: mtech_dev_technos

xilinx_vivado_2016.2

Logging level upon mtech shell open: $ mml [1,2,3] sets the level to [default, verbose, debug] for the next shell that is opened.

We made a first attempt at compiling with SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin

Device Tree Compiler

The dtc binary is not available by default. Download it from kernel.org.

Also relevant:

Trying to build LED blinker project by Pavel Demin. Still working on getting things to compile. Problems, in order of occurrence:

  • hsi from Xilinx SDK not in PATH -> mtech shell fix
  • arm-linux-gnueabihf-gcc not in PATH -> /shared/eda/lnx_exe/xilinx_vivado_2016.2/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/

Date: 07-04-2017
Authors: Raphael

Build VM

Thanks to Noah, we now have a build VM running Ubuntu and Vivado, following Pavel Demin's instructions (mostly).

Building led_blinker

I have successfully managed to build led_blinker (I think).

This required the following steps:

As per Pavel's instructions, after the VM is set up install the following packages:

sudo apt-get --no-install-recommends install \
  build-essential git curl ca-certificates sudo \
  libxrender1 libxtst6 libxi6 lib32ncurses5 \
  bc u-boot-tools device-tree-compiler libncurses5-dev \
  libssl-dev qemu-user-static binfmt-support \
  dosfstools parted debootstrap zerofree

And because there is no gmake on Ubuntu:

sudo ln /usr/bin/make /usr/bin/gmake

Cloning his repo and building:

git clone https://github.com/pavel-demin/red-pitaya-notes
cd red-pitaya-notes
source /opt/Xilinx/Vivado/2016.4/settings64.sh
make NAME=led_blinker all

resulted in errors about a missing libstdc++.so.6, eabi compilers, etc.

PATH has to be amended thusly (the Xilinx toolchain is installed in /vagrant/Xilinx):

# ~/.bashrc:

PATH=$PATH:/vagrant/Xilinx/SDK/2016.2/bin:/vagrant/Xilinx/Vivado/2016.2/bin:/vagrant/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-none-eabi/bin:/vagrant/Xilinx/SDK/2016.2/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin

Besides that, two additional packages are required

apt-get install lib32z1 lib32stdc++6

After that:

# in red-pitaya-notes directory
make NAME=led_blinker all
sudo sh scripts/image.sh scripts/ubuntu.sh red-pitaya-ubuntu.img 1024

NOTE: The scripts/ubuntu.sh file requires a fix:

Change root_tar=ubuntu-base-14.04.5-core-armhf.tar.gz to root_tar=ubuntu-base-14.04.4-core-armhf.tar.gz, because version 14.04.5 does not exist for core-armhf.

For the time being, the project refuses to compile on Noah's machine. We're trying to debug that by comparing packages and package versions across our virtual machines.

I have amended the ansible configuration file to include all of Pavel's listed packages, as well as lib32z1 and lib32stdc++6.

I have also added the .bashrc with the correct PATH variable to the repository.

Building Pulsed Nuclear Magnetic Resonance

See here:

http://pavel-demin.github.io/red-pitaya-notes/pulsed-nmr/

make NAME=pulsed_nmr tmp/pulsed_nmr.bit
arm-linux-gnueabihf-gcc -static -O3 -march=armv7-a -mcpu=cortex-a9 -mtune=cortex-a9 -mfpu=neon -mfloat-abi=hard projects/pulsed_nmr/server/pulsed-nmr.c -o pulsed-nmr -lm
source helpers/pulsed-nmr-ecosystem.sh

Also seems to work, as far as I can tell without loading the image onto the Pita.

alpenthesis LaTeX Class

I have started implementing our LaTeX class. See https://github.com/alpenwasser/alpenthesis/

Not an entirely terrible day. Now we "just" need to get a Linux image and bitstream onto the Pita. :-)


Date: 08-04-2017
Author: Raphael

led_blinker Progress

Successfully loaded led_blinker onto an SD card for the Pitaya and got it to boot. Successfully pinged it and connected via SSH.

Conclusion: The Linux built via Pavel's Makefile seems to be functional. Yay!

Date: 10-05-2017
Author: Raphael

Compiling the FPGA Project from Scratch

In git repository top level:

git submodule init
git submodule update

In firmware/fpga

make all-cores
make zynq_logger
make project

Or alternatively just

make all

Afterwards:

cd build/src
vivado src.xpr

This will run for a while.

Date: 24-05-2017
Author: Raphael

Notes on Setting Up the Toolchain

I set up the toolchain on my desktop from scratch last week. These are some (not very structured) notes I took during that process.

NOTE: Relative paths are always in relation to the repository root. Example:

firmware/fpga

would be

/absolute/path/to/the/repository/firmware/fpga

Initial Setup

git clone https://github.com/alpenwasser/pitaya
cd pitaya
git submodule init
git submodule update

Setting up the Buildbox

NOTE: Buildbox always refers to the guest virtual machine.

Installing Prerequisites

  • vagrant
  • ansible
  • virtualbox

It might be necessary to load the virtualbox kernel modules if you don't want to reboot your machine before being able to use virtualbox:

sudo vboxreload

Creating the Buildbox

Change into the env directory:

cd env
vagrant up

This will download the Ubuntu ISO (14.04 LTS, because that is the version officially supported by Vivado), and install the additional packages as specified by the ansible task list in env/roles/common/tasks/main.yml. The GUI environment will take a while to install even though the VM might already be running. Don't be alarmed by that; just let it do its thing.

Once the process is finished, reboot the VM from the host machine in env/ via:

vagrant halt
vagrant up

Make desired adjustments in the buildbox to your liking (resolution, additional packages like ZSH, dotfiles, etc.). Don't forget to amend the $PATH variable with the Xilinx tool chain if you use your own shell .rc file (the provided .bashrc already does this; check there for the path information you will need).

Account Info

USERNAME: vagrant

PASSWORD: vagrant

NOTE: Tasks which require admin privileges from the graphical user interface will prompt for the user ubuntu's credentials (I reckon gksu has not been set up to ask for vagrant's information). To solve this, change the password for the user ubuntu (you can either make it empty or set it to something of your liking):

vagrant $ sudo passwd ubuntu

gmake

Ubuntu does not come with gmake, but Vivado will need it. Create a symbolic link to make:

sudo ln -s make /usr/bin/gmake

Shared Folders

There are two shared folders: /vagrant/: Points to env/ on the host, and /repo, which points to the repository's root directory on the host.

Setting Up Vivado

Make an Account on Xilinx.com.

NOTE: Filling out FHNW for the Corporation works; for Market I went with testing and measurement.

Download the installer for Vivado Design Suite - HLx Editions from: https://www.xilinx.com/support/download.html

You will need Version 2016.2, which is under Archive. I don't recommend downloading the full 20 GB installer; the Web installer for Linux will do just fine.

Go to the Download directory and execute the installer:

./Xilinx_Vivado_SDK_2016.2_0605_1_Lin64.bin

Go through the install process as here. Note that we install it to the /vagrant/ directory, which is a shared folder with the host machine. This prevents a hugely bloated VM disk image.

Building the Project on the Buildbox

First, clone the repository and initialize the submodules on the host machine in a non-shared folder, as above (uboot cannot build within a shared folder on a Virtualbox VM, so we cannot use the /repo shared folder):

git clone https://github.com/alpenwasser/pitaya
cd pitaya
git submodule init
git submodule update

Then build:

cd firmware
make init

This will build the Vivado project, the ARM GNU/Linux OS and the server application to be run on the Pitaya.

Random Notes

Updating the Submodules to a New Commit

git submodule update --remote
git add dir/to/submodule
git commit 
git push

Tracking a New Remote Branch Locally

If a new branch has been created by somebody else and been pushed to the remote, and you'd like to checkout and track that branch on a machine which does not so far have it:

git checkout --track -b newbranch origin/newbranch

http://stackoverflow.com/questions/1030169/easy-way-pull-latest-of-all-submodules

https://subfictional.com/fun-with-git-submodules/

http://stackoverflow.com/questions/5767850/git-on-custom-ssh-port

http://stackoverflow.com/questions/3596260/git-remote-add-with-other-ssh-port