SysLink 02.00.00.56 c6x 01 alpha2 InstallGuide

Download and Install SysLink & dependent components
Download the SysLink tarball from

Download the SysLink dependencies. Use the Linux self-install binaries and follow the instructions from each component's release notes. For all install binaries, run chmod +x before executing. For CGTOOLS, create the folder c6000_7.2.0B2 under /my-linux-c6x/ and choose that folder during install. For the other components, choose the folder my-linux-c6x; the installation program will install each component under a sub-folder of my-linux-c6x. For example, the ipc install binary installs IPC under my-linux-c6x/ipc_1_22_00_19.

Install all of the dependent components under /my-linux-c6x/ as per the directory structure given below. Install SysLink as follows:

    cd /my-linux-c6x/
    tar -xvzf syslink_02_00_00_56_c6x_01_alpha2.tgz
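The extraction step can be sketched as a short, self-contained shell sequence. The sketch below uses a scratch directory in place of /my-linux-c6x and fabricates a stand-in tarball purely for illustration; only the final tar line is the actual install command from this guide.

```shell
# Scratch directory standing in for the /my-linux-c6x top-level folder.
TOP=$(mktemp -d)
cd "$TOP"

# Fabricate a stand-in release tarball so the sketch is self-contained;
# in a real install you download syslink_02_00_00_56_c6x_01_alpha2.tgz.
mkdir -p syslink_02_00_00_56_c6x_01_alpha2/ti/syslink
tar -czf syslink_02_00_00_56_c6x_01_alpha2.tgz syslink_02_00_00_56_c6x_01_alpha2
rm -rf syslink_02_00_00_56_c6x_01_alpha2

# The install step from this guide: extract under the top-level directory.
tar -xvzf syslink_02_00_00_56_c6x_01_alpha2.tgz
```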


HLOS builds
HLOS (High Level OS) here refers to linux-c6x. The build host is assumed to be running RedHat Linux 4.0.

1. Setup build environment for HLOS build
It is assumed that the Linux host machine used for the build has been set up to build linux-c6x-project. If not, see Main_Page. This section describes how to set up the Linux build environment for building the SysLink kernel module, sample kernel modules, user-land libraries and sample user-land applications.

First edit setenv in the linux-c6x-project directory (this assumes the linux-c6x product build is complete, so setenv is already present in the directory). Select the target platform. In the example, C6474 (Faraday) is selected. If you are building for C6472 (Tomahawk), set SYSLINK_PLATFORM to C6472.


 * setenv file

    # SysLink target platform for build
    export SYSLINK_PLATFORM=C6474
    #export SYSLINK_PLATFORM=C6472
    # SysLink install directory
    export SYSLINK_ROOT=$LINUX_C6X_TOP_DIR/syslink_02_00_00_56_c6x_01_alpha2
    # IPC package install directory
    export IPC_DIR=$LINUX_C6X_TOP_DIR/ipc_1_22_00_19
    # Below for rtos build
    # BIOS
    export BIOS_DIR=$LINUX_C6X_TOP_DIR/bios_6_31_00_18
    # XDC
    export XDC_DIR=$LINUX_C6X_TOP_DIR/xdctools_3_20_05_76
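After sourcing setenv, a quick sanity check helps catch path typos before a long build. This is a hedged sketch, not part of the product: check_syslink_env is a hypothetical helper, and the variable names are the ones exported in the setenv fragment above.

```shell
# Verify the SysLink build variables before starting a build.
# SYSLINK_PLATFORM must be C6474 or C6472, as selected in setenv;
# SYSLINK_ROOT and IPC_DIR must point at real install directories.
check_syslink_env() {
    case "$SYSLINK_PLATFORM" in
        C6474|C6472) ;;
        *) echo "SYSLINK_PLATFORM must be C6474 or C6472"; return 1 ;;
    esac
    for d in "$SYSLINK_ROOT" "$IPC_DIR"; do
        [ -d "$d" ] || { echo "missing directory: $d"; return 1; }
    done
    echo "syslink env OK"
}
```

Run check_syslink_env after `source setenv`; it returns non-zero and names the first problem found.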

2. Build HLOS kernel modules and user samples
SysLink build targets are available in the linux-c6x-project Makefile (which includes Makefile.syslink) to allow building of the SysLink kernel module, sample kernel modules and user sample applications (both HLOS and RTOS) from the top-level project directory. To build SysLink, do the following:

    cd my-linux-c6x/linux-c6x-project
    source setenv

    make syslink-help   - Display all available build targets
    make syslink-all    - Build SysLink kernel modules, user sample applications and rtos samples
    make syslink-clean  - Clean up everything created during the SysLink build

The binaries created are installed under the product/syslink_[target-platform] directory, where target-platform is either C6472 or C6474:

    cd my-linux-c6x/product/syslink_C6472
    ls
    gatempapp.exe     gatempapp.ko     heapbufmpapp.exe  heapbufmpapp.ko
    heapmemmpapp.exe  heapmemmpapp.ko  listmpapp.exe     listmpapp.ko
    messageqapp.exe   messageqapp.ko   notifyapp.exe     notifyapp.ko
    procmgrapp.exe    sharedregionapp.exe  sharedregionapp.ko  syslink.ko

The rtos applications are available with a *.x64P suffix. For example, the notify application to run on Faraday core0 shows as notify_c6474_core0.x64P, and on core1 as notify_c6474_core1.x64P. To build kernel modules and user samples separately, use the following steps:

2.a. Build SysLink & sample kernel modules
    cd my-linux-c6x/linux-c6x-project
    source setenv

    make syslink-kernel        - Build the syslink kernel module and sample modules
    make syslink-kernel-clean  - Clean up syslink generated build files and binaries

2.b. Build HLOS user space sample applications
    cd my-linux-c6x/linux-c6x-project
    source setenv

    make syslink-user        - Build the syslink user space sample applications
    make syslink-user-clean  - Clean up build generated files

RTOS Builds
The RTOS (Real Time OS) used is SyS/BIOS. These instructions assume that the required CGTOOLS is already installed under the /my-linux-c6x/c6000_7.2.0B2 directory and that the build host is running RedHat Linux 4.0.

1. Building from top level linux-c6x-project directory
The RTOS sample applications are built automatically when make syslink-all is invoked from the linux-c6x-project directory. To build only the rtos sample applications, do

make syslink-rtos

To do cleanup invoke

make syslink-rtos-clean

The procedures below are required only if you are building individual samples manually.

2. Setup build environment for manual RTOS builds
RTOS builds are done using the xdc build tool. The sample source code uses the IPC package to provide IPC services to the application. The sample applications can be found at $SYSLINK_ROOT/ti/syslink/samples/rtos/. There is a sample for each IPC module or utility module tested. NOTE: All steps assume that the top-level directory is /my-linux-c6x/, so provide absolute paths based on where your top-level directory is located.

Set up the path of the xdc tool in the PATH environment variable. For example:

    export PATH=$PATH:/my-linux-c6x/xdctools_3_20_05_76

Next set up the XDCPATH environment variable:

    export XDCPATH="/my-linux-c6x/bios_6_31_00_18/packages;/my-linux-c6x/ipc_1_22_00_19/packages"

Then set SYSLINK_ROOT:

    export SYSLINK_ROOT=/my-linux-c6x/syslink_02_00_00_56_c6x_01_alpha2

Make sure that the component release folder names used in the above export commands match those installed under the my-linux-c6x folder.

Edit the config.bld under $SYSLINK_ROOT:

 - Set the var rootDirPre to the root directory of the CGTOOLS folder (absolute path):

    var rootDirPre = "/my-linux-c6x/";

 - Set C64P_COFF.rootDir as:

    C64P_COFF.rootDir = rootDirPre + "c6000_7.2.0B2" + rootDirPost;

The above two define the absolute path of the folder where CGTOOLS is installed. While installing CGTOOLS, make sure the folder name (in this case c6000_7.2.0B2) is chosen to match the CGTOOLS release version.

Set the target platforms to build as (shown for C6474):

    C64P_COFF.platforms = [
        "ti.syslink.samples.rtos.platforms.evm6474.core0",
        "ti.syslink.samples.rtos.platforms.evm6474.core1",
    ];
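Note that xdc expects the entries in XDCPATH to be separated by semicolons, even on Linux. A small hedged sketch (check_xdcpath is a hypothetical helper, not part of the tools) that walks the entries and flags missing package directories:

```shell
# XDCPATH entries are ';'-separated per xdc convention, even on Linux.
# Walk each entry and report any package directory that does not exist.
check_xdcpath() {
    old_ifs=$IFS
    IFS=';'
    status=0
    for entry in $XDCPATH; do
        if [ ! -d "$entry" ]; then
            echo "missing XDCPATH entry: $entry"
            status=1
        fi
    done
    IFS=$old_ifs
    return $status
}
```

Running check_xdcpath before invoking xdc catches folder-name mismatches (a common source of build failures when a release version differs from the one in this guide).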

Examples of config.bld are given below:-

 * For C6474

    /*
     *  ======== config.bld ========
     *  Sample Build configuration script
     */
    // On Linux Host. Modify this to suit your tool chain paths
    //var rootDirPre = "/opt/DSPLINK/";
    var rootDirPre = "/my-linux-c6x/";
    var rootDirPost = "";

    //********************* Setup for C64P target for COFF *************************
    var C64P_COFF = xdc.useModule('ti.targets.C64P');
    //C64P_COFF.rootDir = rootDirPre + "c6000_7.2.0A10232" + rootDirPost;
    //C64P_COFF.rootDir = rootDirPre + "TI_CGT_C6000_6.1.5" + rootDirPost;
    C64P_COFF.rootDir = rootDirPre + "c6000_7.2.0B2" + rootDirPost;
    C64P_COFF.ccOpts.suffix += " -mi10 -mo ";

    // set default platform and list of all interested platforms for C64P COFF
    C64P_COFF.platforms = [
        //"ti.platforms.evm3530:dsp",
        "ti.syslink.samples.rtos.platforms.evm6474.core0",
        "ti.syslink.samples.rtos.platforms.evm6474.core1",
    ];

 * For C6472

    /*
     *  ======== config.bld ========
     *  Sample Build configuration script
     */
    // On Linux Host. Modify this to suit your tool chain paths
    //var rootDirPre = "/opt/DSPLINK/";
    var rootDirPre = "/my-linux-c6x/";
    var rootDirPost = "";

    //********************* Setup for C64P target for COFF *************************
    var C64P_COFF = xdc.useModule('ti.targets.C64P');
    //C64P_COFF.rootDir = rootDirPre + "c6000_7.2.0A10232" + rootDirPost;
    //C64P_COFF.rootDir = rootDirPre + "TI_CGT_C6000_6.1.5" + rootDirPost;
    C64P_COFF.rootDir = rootDirPre + "c6000_7.2.0B2" + rootDirPost;
    C64P_COFF.ccOpts.suffix += " -mi10 -mo ";

    // set default platform and list of all interested platforms for C64P COFF
    C64P_COFF.platforms = [
        //"ti.platforms.evm3530:dsp",
        "ti.syslink.samples.rtos.platforms.evm6472.core0",
        "ti.syslink.samples.rtos.platforms.evm6472.core1",
        "ti.syslink.samples.rtos.platforms.evm6472.core2",
        "ti.syslink.samples.rtos.platforms.evm6472.core3",
        "ti.syslink.samples.rtos.platforms.evm6472.core4",
    ];

3. Building RTOS sample applications
To build all sample applications, do:

    cd $SYSLINK_ROOT
    xdc all XDCBUILDCFG="$SYSLINK_ROOT/config.bld" -PR .

To build sample applications individually, do:

    cd $SYSLINK_ROOT/ti/syslink/ipc
    xdc all -PR .
    cd $SYSLINK_ROOT/ti/syslink/samples/rtos/platforms
    xdc all -PR .

For example, to build a single sample application, do:

    cd $SYSLINK_ROOT/ti/syslink/samples/rtos/<module>
    xdc all

where module may be notify, sharedRegion, messageQ etc. To clean up all, do:

    xdc clean XDCBUILDCFG="$SYSLINK_ROOT/config.bld" -PR .

To clean up individual modules:

    cd $SYSLINK_ROOT/ti/syslink/samples/rtos/<module>
    xdc clean

The BIOS application executables are created with the suffix .x64P under $SYSLINK_ROOT/ti/syslink/samples/rtos/<module>/ti_syslink_samples_rtos_platforms_<platform>_core0/debug/, where module may be notify, messageq etc. For example, look for notify_c6474_core0.x64P under $SYSLINK_ROOT/ti/syslink/samples/rtos/notify/ti_syslink_samples_rtos_platforms_evm6474_core0/debug/ and notify_c6474_core1.x64P under $SYSLINK_ROOT/ti/syslink/samples/rtos/notify/ti_syslink_samples_rtos_platforms_evm6474_core1/debug/. To copy all executables to a destination folder, you may execute the following linux command on the host machine:

    cp `find . -name '*.x64P'` <destination-folder>
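The copy step can be sketched end to end. The sketch below fabricates a small build tree in place of the real $SYSLINK_ROOT output (folder and file names are illustrative); note the quoted pattern, which keeps the shell from expanding it before find sees it.

```shell
# Fabricated stand-in for the rtos samples output tree.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$SRC/notify/debug" "$SRC/messageq/debug"
touch "$SRC/notify/debug/notify_c6474_core0.x64P"
touch "$SRC/messageq/debug/messageq_c6474_core0.x64P"

# Collect every .x64P executable under the tree into one folder.
cd "$SRC"
find . -name '*.x64P' -exec cp {} "$DEST" \;
```

Using `find ... -exec cp` instead of backtick substitution also survives paths containing spaces.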

Running multi-core sample applications
The multi-core applications have two components: one part of the application runs on the Linux Host (referred to as the HLOS sample application) and the other part runs on the slave cores (referred to as the RTOS or SyS/BIOS IPC sample application). The Host is also known as the master core and the others as slave cores. On each core, the application initializes all resources used by the application and also initializes the IPC. It then executes a set of API calls to invoke the services of the module under test. So running a multi-core sample application involves running the above two sample applications. The SyS/BIOS IPC sample application calls APIs from the IPC package. The master core runs the SysLink sample application under Linux. The IPC modules of both sample applications communicate using shared memory as transport and IPC hardware interrupts. The application on the slave core waits for the master core to initiate the test.

1. Test configuration
On the Faraday EVM, the Linux Host runs on Core2 and BIOS applications run on Core0 and Core1 to demonstrate IPC between the Host and BIOS cores. Similarly for Tomahawk, Core5 runs the Linux Host and Core0-4 run BIOS applications. Loading of the slave cores (Core0 and Core1 on Faraday, Core0-4 on Tomahawk) and the Linux Host core (the core with the highest core id) is done manually using CCS or using Debug Server Scripts (follow steps 2.a or 2.b below). The IPC requires shared memory between the cores as transport. On Faraday, the upper 16M of the DDR2 is used for this. This requires the bootarg variable mem=112M to be set in the kernel bootargs to reserve the upper 16M for SysLink. On Tomahawk, SL2 is used as shared memory, so this is not necessary there.

2.a Loading and running Linux kernel and RTOS samples using CCS
This assumes that CCS is installed on a machine running MS Windows. If not, see [] to install CCS.

The first step is to load and run Linux on the Linux Host core (the core with the highest core id). Follow the procedure at [] for C6472 or [] for C6474. The next step is to load and run the bios sample application executables on the slave cores. The SyS/BIOS IPC sample application executables are in COFF format and are named with a .x64P suffix. Pick the executable with core_id 0 for core0, core_id 1 for core1 and so forth. The steps to load and run the application are the same as those for Linux given above. At the end of these steps, all cores are up and running. The IP address of the Linux Host's ethernet interface can be obtained from the CIO console log; this will be required when running the SysLink sample application.

2.b Loading and running Linux kernel and RTOS sample applications using DSS script
Loading and running the master and slave cores using a DSS script saves time. DSS comes installed as part of CCS; it can also be installed as a standalone package. Read more about DSS at []. dss.bat can be found under c:\Program Files\Texas Instruments\ccsv4\scripting\bin. A sample script, SysLinkTest.js, is provided under linux-c6x-project/scripts/syslink/syslinktest.js for this purpose. This procedure assumes the evm is connected to the windows machine using the emulator cable and powered on. To load and run the cores, do the following on the windows machine:

 1) Create a scripts folder and copy syslinktest.js to the folder.
 2) Create the following folders under scripts:
    a) syslink\configs
    b) syslink\images\evm6486
    c) syslink\images\evm6488
    d) syslink\logs
 3) Copy the target specific CCS .ccxml under configs. This is created by CCS from the view->target configurations menu as part of the loading-Linux-through-CCS procedure. Rename the files as evmTCI6486.ccxml or evmTCI6488.ccxml (evmTCI6486 for target C6472 and evmTCI6488 for C6474).
 4) Copy the SyS/BIOS IPC images (with .x64P suffix) to syslink\images\evm6486 for Tomahawk or syslink\images\evm6488 for Faraday. Also copy vmlinux- from the product directory to the above directory and rename the file to vmlinux.
 5) Set the SYSLINK_TEST_DIR environment variable. Open a command shell and execute:

    set SYSLINK_TEST_DIR=\syslink

 6) Before running your script, syslinktest.js, make sure the emulator target name is the same as that used in your CCS. Launch a debug session to your board once through CCS and make sure the names match those shown in CCS. For example, for the CCS 4.2.x release, change the script for the emulator names as:

    debugSession = debugServer.openSession("*","C64XP_1A");
    debugSession1 = debugServer.openSession("*","C64XP_1B");

    and so forth for Tomahawk.

    ===== Affected code cut-n-pasted from syslinktest.js =====
    timeout_linux_core = 45000;
    debugSession  = debugServer.openSession("*","C64XP");
    debugSession1 = debugServer.openSession("*","C64XP_1");
    debugSession2 = debugServer.openSession("*","C64XP_2");
    debugSession3 = debugServer.openSession("*","C64XP_3");
    debugSession4 = debugServer.openSession("*","C64XP_4");
    debugSession5 = debugServer.openSession("*","C64XP_5");
    ==========================================================

    Then run:

    cd c:\Program Files\Texas Instruments\ccsv4\scripting\bin
    dss.bat \syslinktest.js TCI6486_USB notify LE

    This will load and run vmlinux on the Host core and the notify sample exe files on the slave cores. For testing other SysLink modules, check the name to use from the script file.
 7) When the script completes loading and running, the following is displayed: "Type any key once syslink test is complete". It also shows "********IpcResetVector for notify is 0x80c800", which will be required when running the SysLink sample application.

3. Running SysLink HLOS sample applications
To run the SysLink sample applications, the following are required:

 - The .exe and .ko files from the HLOS build above.
 - The IP address of the Linux Host's Ethernet interface (displayed in the CIO console of the Linux Host core as part of the Linux kernel boot-up log).
 - The Ipc_ResetVector address. If the DSS script is used for loading and running the cores, this is displayed on the DSS console. If CCS is used, the following command can be used from a shell at $SYSLINK_ROOT after the SyS/BIOS IPC sample build:

    find . -name '*.map' | xargs grep Ipc_ResetVector

   Alternatively, open the map file and search for the string. For example, for running notify, open $SYSLINK_ROOT/ti/syslink/samples/rtos/notify/package/cfg/ti_syslink_samples_rtos_platforms_evm6474_core0/debug/notify_c6474_core0.x64P.map
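The Ipc_ResetVector lookup can be scripted end to end. The sketch below runs against a fabricated map file standing in for the real build output; the address 0x80c800 is the example value used throughout this guide, and the awk field choice assumes the address is the second column of the map entry.

```shell
# Fabricated map file standing in for the real build output.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/notify/debug"
printf 'Ipc_ResetVector  0x80c800\n' > "$ROOT/notify/debug/notify_c6474_core0.x64P.map"

# Search every .map file and extract the address from the second
# column of the matching line (grep -h suppresses the file name).
cd "$ROOT"
RESET_VECTOR=$(find . -name '*.map' | xargs grep -h Ipc_ResetVector | awk '{print $2}')
echo "Ipc_ResetVector is $RESET_VECTOR"
```

The extracted value is what gets passed as the argument to the sample .exe on the target.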

3.a Running user land sample application
To run the application (faraday assumed):

 1) Do chmod +x /my-linux-c6x/product/syslink_C6474/*.exe
 2) Telnet to the evm using the IP address of the Ethernet interface and create a test folder:

    mkdir /var/local/test

 3) If using nfs for rootfs, copy the *.ko and *.exe files to /var/local/test (if not using nfs, tftp the files to the folder). At the shell execute:

    insmod syslink.ko

    Create the syslink device nodes manually by running the following commands (cut-n-paste):

    mkdir /dev/syslinkipc/
    mknod -m 777 /dev/syslinkipc/Osal c 253 12
    mknod -m 777 /dev/syslinkipc/Ipc c 253 11
    mknod -m 777 /dev/syslinkipc/ProcMgr c 253 0
    mknod -m 777 /dev/syslinkipc/Notify c 253 1
    mknod -m 777 /dev/syslinkipc/MultiProc c 253 10
    mknod -m 777 /dev/syslinkipc/NameServer c 253 2
    mknod -m 777 /dev/syslinkipc/SharedRegion c 253 3
    mknod -m 777 /dev/syslinkipc/HeapBufMP c 253 4
    mknod -m 777 /dev/syslinkipc/HeapMemMP c 253 5
    mknod -m 777 /dev/syslinkipc/HeapMultiBuf c 253 6
    mknod -m 777 /dev/syslinkipc/ListMP c 253 7
    mknod -m 777 /dev/syslinkipc/GateMP c 253 8
    mknod -m 777 /dev/syslinkipc/MessageQ c 253 9
    mknod -m 777 /dev/syslinkipc/SyslinkMemMgr c 253 13
    mknod -m 777 /dev/syslinkipc/ClientNotifyMgr c 253 14
    mknod -m 777 /dev/syslinkipc/FrameQBufMgr c 253 15
    mknod -m 777 /dev/syslinkipc/FrameQ c 253 16
    mknod -m 777 /dev/syslinkipc/RingIO c 253 17

 4) At the target shell, execute the sample application (.exe file) for the SysLink module under test. This should match the SyS/BIOS IPC sample running on the slave cores, i.e. if the notify sample is running, then execute notifyapp.exe in this session:

    ./notifyapp.exe 0x80c800

 5) The application should exit after running.
 6) To run any other application, invoke the sample application with the corresponding Ipc_ResetVector address for the RTOS sample application.
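The device-node list can be generated from a name:minor table instead of pasted by hand. This is a hedged sketch: emit_syslink_nodes is a hypothetical helper, the name:minor pairs are copied from the list above, and the major number 253 is assumed, so confirm it against /proc/devices on your target before use.

```shell
# Print the mkdir/mknod commands for the SysLink IPC device nodes.
# name:minor pairs and major 253 are taken from the list in this guide;
# verify the major number in /proc/devices before running on the target.
emit_syslink_nodes() {
    major=253
    echo "mkdir -p /dev/syslinkipc"
    for entry in ProcMgr:0 Notify:1 NameServer:2 SharedRegion:3 \
                 HeapBufMP:4 HeapMemMP:5 HeapMultiBuf:6 ListMP:7 \
                 GateMP:8 MessageQ:9 MultiProc:10 Ipc:11 Osal:12 \
                 SyslinkMemMgr:13 ClientNotifyMgr:14 FrameQBufMgr:15 \
                 FrameQ:16 RingIO:17; do
        name=${entry%:*}
        minor=${entry#*:}
        echo "mknod -m 777 /dev/syslinkipc/$name c $major $minor"
    done
}
emit_syslink_nodes
```

On the target, pipe the output through a shell as root: `emit_syslink_nodes | sh`.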

3.a.1 Sample logs on Linux Host for notify running on C6474
spawn telnet 158.218.100.193
Trying 158.218.100.193...
Connected to 158.218.100.193.
Escape character is '^]'.

158.218.100.193 login: root
/root # cd /var/local/c6474_12_13
/var/local/c6474_12_13 # insmod syslink.ko
/var/local/c6474_12_13 # ./notifyapp.exe 0x80c800
NotifyApp sample application
Entered NotifyApp_startup
Entered ProcMgrApp_startup_manual attaching procId = 0
ProcMgr_attach status: [0x97d2000]
After attach: ProcMgr_getState state [0x4]
After load: ProcMgr_getState state [0x4]
ProcMgr_start LOADCALLBACK passed [0x0]
Ipc_CONTROLCMD_STARTCALLBACK:
After start: ProcMgr_getState state [0x4]
Leaving ProcMgrApp_startup
Entered ProcMgrApp_startup_manual attaching procId = 1
ProcMgr_attach status: [0x97d2000]
After attach: ProcMgr_getState state [0x4]
After load: ProcMgr_getState state [0x4]
ProcMgr_start LOADCALLBACK passed [0x0]
Ipc_CONTROLCMD_STARTCALLBACK:
After start: ProcMgr_getState state [0x4]
Leaving ProcMgrApp_startup
Registered local event number 10 with Notify module for processor 2
Registered local event number 11 with Notify module for processor 2
Registered remote event number 10 with Notify module for processor 0
Registered remote event number 11 with Notify module for processor 0
Registered remote event number 10 with Notify module for processor 1
Registered remote event number 11 with Notify module for processor 1
Leaving NotifyApp_startup. Status [0x0]
Entered NotifyApp_execute
Sending events to local processor
Sent 0 events to event ID 10 to local processor 2
Sent 0 events to event ID 11 to local processor 2
Sent 100 events to event ID 10 to local processor 2
Sent 100 events to event ID 11 to local processor 2
Sending events to CORE0
Received 100 events for event ID 10 from processor 2
Received 100 events for event ID 11 from processor 2
Sent 0 events to event ID 10 to remote processor 0
Received 200 events for event ID 10 from processor 2
Received 200 events for event ID 11 from processor 2
Sent 0 events to event ID 11 to remote processor 0
Received 100 events for event ID 10 from processor 0
Received 100 events for event ID 11 from processor 0
Sent 100 events to event ID 10 to remote processor 0
Sent 100 events to event ID 11 to remote processor 0
Received 200 events for event ID 10 from processor 0
Received 200 events for event ID 11 from processor 0
Sending events to CORE1
Sent 0 events to event ID 10 to remote processor 1
Sent 0 events to event ID 11 to remote processor 1
Received 100 events for event ID 10 from processor 1
Received 100 events for event ID 11 from processor 1
Sent 100 events to event ID 10 to remote processor 1
Sent 100 events to event ID 11 to remote processor 1
Leaving NotifyApp_execute
Wait till 200 notifications each on 2 events are received from all 3 slave cores, and then press enter to continue ...
Received 200 events for event ID 10 from processor 1
Received 200 events for event ID 11 from processor 1
=== Presss Enter when test is complete ===
Entered NotifyApp_shutdown
Unregistered local event number 10 with Notify module. Status [0x0]
Unregistered local event number 11 with Notify module. Status [0x0]
Unregistered remote event number 10 with Notify module. Status [0x0]
Unregistered remote event number 11 with Notify module. Status [0x0]
Unregistered remote event number 10 with Notify module. Status [0x0]
Unregistered remote event number 11 with Notify module. Status [0x0]
Entered ProcMgrApp_shutdown
Ipc_control Ipc_CONTROLCMD_STOPCALLBACK status: [0x97d2000]
ProcMgr_detach status: [0x6a85000]
After detach: ProcMgr_getState state [0x0]
ProcMgr_close status: [0x0]
Leaving ProcMgrApp_shutdown

Entered ProcMgrApp_shutdown
Ipc_control Ipc_CONTROLCMD_STOPCALLBACK status: [0x97d2000]
ProcMgr_detach status: [0x6a85000]
After detach: ProcMgr_getState state [0x0]
ProcMgr_close status: [0x0]
Leaving ProcMgrApp_shutdown

Leaving NotifyApp_shutdown

3.a.2 Sample logs on Windows Host from DSS console
C:\Program Files\Texas Instruments\ccsv4\scripting\bin>dss.bat c:\project\linux-dsp\scripts\SysLinkTest.js TCI6488_USB notify LE
Test selected is notify TCI6488_USB LE
Start SysLink test Little Endian @ 2010_12_17_151917
C64XP_1A: GEL Output: PLL1 has been configured.
C64XP_1A: 2: GEL StartUp Complete (Primary Core).
C64XP_1B: 2: GEL StartUp Complete.
C64XP_1C: 2: GEL StartUp Complete.
Loading linux program C:\project\linux-dsp\scripts\syslink_12_13\images\evm6488\vmlinux
Loading Linux program C:\project\linux-dsp\scripts\syslink_12_13\images\evm6488\vmlinux
C64XP_1C: GEL Output: Turn off cache segment
DEBUG: Loading successful for linux core...
C64XP_1C: GEL Output: Disable EDMA events
Linux version 2.6.34-evmc6474.el-20101207 (a0868495@gtcs13.gt.design.ti.com) (gcc version 3.2.2) #3 Tue Dec 7 17:38:31 EST 2010
Designed for the EVM6474 board, Texas Instruments.
CPU2: C64x+ rev 0x10, 1.2 volts, 1000MHz
Initializing kernel
physical RAM map changed by user
no initrd specified
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 28448
Kernel command line: mem=112M console=cio root=/dev/nfs rw nfsroot=158.218.100.179:/local/mkaricheri/target-dsp-elf ip=dhcp
PID hash table entries: 512 (order: -1, 2048 bytes)
Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
Memory available: 110584k/111592k RAM, 0k/0k ROM (557k kernel code, 86k data)
SLUB: Genslabs=13, HWalign=128, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
Hierarchical RCU implementation.
RCU-based detection of stalled CPUs is enabled.
NR_IRQS:192
console [cio0] enabled
Console: colour dummy device 80x25
Calibrating delay loop... 997.37 BogoMIPS (lpj=1994752)
Mount-cache hash table entries: 512
C64x: 16 gpio irqs
NET: Registered protocol family 16
RIO: register sRIO controller for hostid 0
TCI648x RapidIO driver v2.1
RIO: setting EDMA threshold to 0xffffffff
bio: create slab  at 0
Switching to clocksource TSC64
NET: Registered protocol family 2
IP route cache hash table entries: 1024 (order: 0, 4096 bytes)
TCP established hash table entries: 4096 (order: 3, 32768 bytes)
TCP bind hash table entries: 4096 (order: 3, 32768 bytes)
TCP: Hash tables configured (established 4096 bind 4096)
TCP reno registered
UDP hash table entries: 128 (order: 0, 4096 bytes)
UDP-Lite hash table entries: 128 (order: 0, 4096 bytes)
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
NET: Registered protocol family 1
eth0: EMAC(0) driver version 2.1 IRQ=6 queue=2
eth0: MAC address=00:24:ba:3a:22:cc PHY=SGMII
MCORE: create SRAM, core=0, start=0x10800000 size=0x100000
MCORE: create DDR, core=0, start=0x80000000 size=0x80000000
MCORE: create SRAM, core=1, start=0x11800000 size=0x100000
MCORE: create DDR, core=1, start=0x80000000 size=0x80000000
MCORE: create SRAM, core=2, start=0x12800000 size=0x100000
ROMFS MTD (C) 2007 Red Hat, Inc.
msgmni has been set to 215
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler cfq registered (default)
io scheduler deadline registered
io scheduler noop registered
Generic platform RAM MTD, (c) 2004 Simtec Electronics
uclinux[mtd]: RAM probe address=0x80306000 size=0x0
Creating 1 MTD partitions on "RAM":
0x000000000000-0x000000000000 : "ROMfs"
mtd: partition "ROMfs" is out of reach -- disabled
console [netcon0] enabled
netconsole: network logging started
brd: module loaded
loop: module loaded
at24 1-0050: 131072 byte 24c1024 EEPROM (writable)
TCP cubic registered
NET: Registered protocol family 17
Sending DHCP requests ., OK
IP-Config: Got DHCP answer from 0.0.0.0, my address is 158.218.100.193
IP-Config: Complete:
device=eth0, addr=158.218.100.193, mask=255.255.255.0, gw=158.218.100.2,
host=158.218.100.193, domain=am.dhcp.ti.com, nis-domain=(none),
bootserver=0.0.0.0, rootserver=158.218.100.179, rootpath=
Looking up port of RPC 100003/2 on 158.218.100.179
Looking up port of RPC 100005/1 on 158.218.100.179
VFS: Mounted root (nfs filesystem) on device 0:11.
Freeing unused kernel memory: 120K freed
starting pid 15, tty '': '/etc/rc.sysinit'
Starting system...
Mounting proc filesystem: done.
Mounting other filesystems: mount: mounting sysfs on /sys failed: No such file or directory
done.
Setting hostname 158.218.100.193: done.
Bringing up loopback interface: done.
Starting inetd: done.
System started.
starting pid 32, tty '/dev/console': '/bin/sh'
/ # SEVERE: Timed out after 15000ms
SEVERE: com.ti.debug.engine.scripting.Target.run: Timed out after 15000ms
DEBUG: Running of Linux core is successful...
Loading and running BIOS application for notify
Loading C:\project\linux-dsp\scripts\syslink_12_13\images\evm6488\notify_c6474_core0.x64P
Loading C:\project\linux-dsp\scripts\syslink_12_13\images\evm6488\notify_c6474_core0.x64P
C64XP_1A: GEL Output: Turn off cache segment
Loading C:\project\linux-dsp\scripts\syslink_12_13\images\evm6488\notify_c6474_core1.x64P
Loading C:\project\linux-dsp\scripts\syslink_12_13\images\evm6488\notify_c6474_core1.x64P
C64XP_1A: GEL Output: Disable EDMA events
********IpcResetVector is 0x80c800 *********
C64XP_1B: GEL Output: Turn off cache segment
C64XP_1B: GEL Output: Disable EDMA events
SEVERE: Timed out after 5000ms
SEVERE: com.ti.debug.engine.scripting.Target.run: Timed out after 5000ms
DEBUG: Running of core0 successful...
SEVERE: Timed out after 5000ms
SEVERE: com.ti.debug.engine.scripting.Target.run: Timed out after 5000ms
DEBUG: Running of core1 successful...
********IpcResetVector for notify is 0x80c800 *********
Type any key once syslink test is complete

3.b Running kernel sample module
1) Start IPC with the remote cores:

    spawn telnet 158.218.100.193
    Trying 158.218.100.193...
    Connected to 158.218.100.193.
    Escape character is '^]'.

    158.218.100.193 login: root
    /root # cd /var/local/c6474_12_13
    /var/local/c6474_12_13 # insmod syslink.ko
    /var/local/c6474_12_13 # ./procmgrapp.exe 0x80c800
    ProcMgrApp sample application
    Entered ProcMgrApp_startup_manual attaching procId = 0
    ProcMgr_attach status: [0x97d2000]
    After attach: ProcMgr_getState state [0x4]
    After load: ProcMgr_getState state [0x4]
    ProcMgr_start LOADCALLBACK passed [0x0]
    Ipc_CONTROLCMD_STARTCALLBACK:
    After start: ProcMgr_getState state [0x4]
    Leaving ProcMgrApp_startup
    Entered ProcMgrApp_startup_manual attaching procId = 1
    ProcMgr_attach status: [0x97d2000]
    After attach: ProcMgr_getState state [0x4]
    After load: ProcMgr_getState state [0x4]
    ProcMgr_start LOADCALLBACK passed [0x0]
    Ipc_CONTROLCMD_STARTCALLBACK:
    After start: ProcMgr_getState state [0x4]
    Leaving ProcMgrApp_startup
    Press enter to continue and perform shutdown ...
    =====> Once test below is complete, press

2) insmod the kernel sample module:

    /var/local/c6474_12_13 # insmod notifyapp.ko

The logs will be similar to the user sample case.