SysLink for c6x

Introduction
This document contains the release notes for SYS/Link.

SYS/Link, also referred to as SysLink, is a software platform that simplifies the development of embedded applications in which a general-purpose processor (GPP) controls and communicates with one or more other processors (microprocessors or DSPs). SysLink provides control and communication paths between GPP OS threads and SYS/BIOS tasks. The SysLink product provides software connectivity between multiple processors. Each processor may run either an HLOS such as Linux, WinCE, or Symbian, or a real-time operating system such as SYS/BIOS™ or PrOS. The software architecture of each module differs based on the characteristics of the operating system running on that processor. The current SysLink supports Linux on the HLOS side and SYS/BIOS on the RTOS side. The associated porting kit can be used to port SysLink to other operating systems.

The SysLink product provides the following services to frameworks and applications:
 * Processor Manager
 * Inter-Processor Communication
 * Utility modules

The SYS/BIOS operating system is expected to be running on the slave processors on all of these platforms.

This is an early adopter release with limited validation. This release is provided to give an early look at SysLink to framework/application developers for the above devices.

The SysLink implementation is currently in progress. Design changes are being evaluated and implemented for the Linux-side sources to move the implementation of most modules (except Notify) to user space. Several advanced features, such as Dynamic Memory Mapping, Dynamic System Memory Management, Slave Error Handling, and Advanced Power Management, are not currently implemented and are planned for future releases.

An Alpha version of the SysLink port for the c64x platforms C6474 (Faraday) and C6472 (Tomahawk) is available. A Beta release is now available which supports the c66x platforms C6670 and C6678. SysLink-c6x is an extension of SysLink for the above platforms. For more details, refer to [SysLink].

Release Components
The SysLink release package contains the following components:

Prerequisites

 * A Linux host machine with CG TOOLS/CCS installed, along with the linux-c6x SDK. It is assumed that a successful linux-c6x product build has been completed on this host.

Generic features
SysLink provides user-side and kernel-side APIs for:


 * Processor Manager
 * Inter-Processor Communication protocols

In this release, the following HLOS modules are supported:


 * Ipc
 * Notify
 * MessageQ
 * ListMP
 * SharedRegion
 * MultiProc
 * GateMP
 * NameServer
 * HeapBufMP
 * HeapMemMP

Ipc
The Ipc module initializes all IPC and SysLink components on behalf of the application. It provides a single platform-specific location for system-level module initialization and configuration. It also allows applications to seamlessly configure the SysLink and IPC components.

On a multi-processor system, the Ipc module resides on each processor and enables any two cores to synchronize with each other when setting up the system configuration.

Ipc is an optional module. The application writer can choose whether or not to use the Ipc module for configuring the other modules. If it is not used, all IPC modules must be individually configured by the application writer. However, due to the complexity of this configuration, it is advisable to use the Ipc module for system configuration and synchronization.

Processor Manager module
The Processor Manager on a master processor provides control functionality for a slave device.

ProcMgr allows the Host application to attach an IPC instance to communicate with a remote core. Unlike on other platforms, ProcMgr in the c6x port assumes that the slave cores are loaded and run using other mechanisms; i.e., ProcMgr does not support loading and starting the slave cores. It supports the minimal functionality needed to attach to a remote core and start an IPC instance with it. Once the slave cores are running, the application invokes the LOADCALLBACK and STARTCALLBACK control command IOCTLs. The slave core's Ipc_ResetVector is passed as an argument to the LOADCALLBACK IOCTL command, which creates an IPC instance with the slave core. The IPC is started by invoking the STARTCALLBACK IOCTL. The application can then invoke the other IPC IOCTLs.

Notify
The Notify Manager manages the multiplexing/de-multiplexing of software interrupts over hardware interrupts. The Notify module uses notify drivers that do the actual management of the callback functions and interface to the hardware.

MessageQ
The MessageQ module supports the structured sending and receiving of variable length messages. This module can be used for homogeneous or heterogeneous multi-processor messaging.

MessageQ provides more sophisticated messaging than other modules. It is typically used for complex situations such as multi-processor messaging. The following are key features of the MessageQ module:


 * Writers and readers can be relocated to another processor with no runtime code changes.
 * Timeouts are allowed when receiving messages.
 * Readers can determine the writer and reply back.
 * Messages can reside on any message queue.
 * Supports zero-copy transfers.
 * Notification mechanism is specified by application.

GateMP
This module implements multi-processor critical-section gates with remote and local protection. The type of gate used, whether software (GatePeterson) or hardware (GateHwSpinlock/GateHwSem/GateAAMonitor), depends on the device capabilities.

ListMP
The ListMP Manager is a doubly linked-list based module designed to be used in a multi-processor environment. It provides a way for multiple processors to create, access, and manipulate a linked list in shared memory.

HeapBufMP/HeapMemMP
The HeapBufMP memory manager provides functions to allocate and free storage from a heap of type HeapBufMP which inherits from IHeap. HeapBufMP manages a single fixed-size buffer, split into equally sized allocable blocks. The HeapBufMP module is intended as a very fast memory manager which can only allocate blocks of a single size. It is ideal for managing a heap that is only used for allocating a single type of object, or for objects that have very similar sizes.

The HeapMemMP module is a variable size multi-processor memory manager, protected with a multi-processor GateMP.

SharedRegion
The SharedRegion module is designed to be used in a multi-processor environment where there are memory regions that are shared and accessed across different processors.

This module creates a shared memory region lookup table. This lookup table contains the processor's view for every shared region in the system. Each processor has its own lookup table. Each processor's view of a particular shared memory region can be determined by the same table index across all lookup tables. Each table entry is a base and length pair. During runtime, this table along with the shared region pointer is used to do a quick address translation.

If specified in configuration, a multi-processor heap can be created in the SharedRegion, which can be used by applications for their shared memory requirements. The heap created is of type HeapMemMP and is protected by the system gate.

List
The List module makes available a set of functions that manipulate List objects accessed through handles of type List_Handle. Each List contains a linked sequence of zero or more elements referenced through variables of type List_Elem, which are typically embedded as the first field within a structure.

MultiProc
Many multi-processor modules rely on the concept of a processor id. MultiProc centralizes processor id management into one module.

MultiProc also provides information about the remote processors, generally to improve performance and to minimize data footprint.

NameServer
The NameServer module manages local name/value pairs that enable an application and other modules to store and retrieve values based on a name. The module supports different lengths of values. The add/get functions are for variable length values. The NameServer module can be used in a multiprocessor system. The module communicates to other processors via the Remote driver.

Sample applications
Sample applications to demonstrate usage of modules have been provided for:
 * Notify
 * MessageQ
 * HeapBufMP
 * HeapMemMP
 * ListMP
 * SharedRegion
 * GateMP

What's not supported

The following features are not supported in this release:
 * ProcMgr
 * FrameQ
 * RingIO
 * HeapMultiBufMP module

Running multi-core sample applications
A multi-core application has two components. One part of the application runs on the Linux Host (referred to as the HLOS sample application) and the other part runs on the slave cores (referred to as the RTOS or SYS/BIOS IPC sample application). The Host is also known as the master core, and the others are the slave cores. On each core, the application initializes all resources used by the application, initializes the IPC, and then executes a set of API calls to invoke the services of the module under test. Running a multi-core sample application therefore involves running both of the above sample applications. The SYS/BIOS IPC sample application calls APIs from the IPC package. The master core runs the SysLink sample application under Linux. The IPC modules of the two sample applications communicate using shared memory as the transport and IPC hardware interrupts. The application on a slave core waits for the master core to initiate the test.

1. Test configuration
On the C6670 and C6678 EVMs, the Linux Host runs on Core0 and BIOS applications run on Core1 through CoreN, where N is 3 for C6670 and 7 for C6678, to demonstrate IPC between the Host and BIOS cores. Loading of the slave cores is done using the mcoreloader (available under /usr/bin when the mcsdk-demo-root root fs is built). The slave cores can also be loaded manually using CCS. The IPC requires shared memory between the cores as the transport. On C6670 and C6678, SharedRegion 0 is in MSMC memory and SharedRegion 1 is in DDR. This requires mem=256M to be set in the kernel bootargs so that the upper 256M is reserved for SysLink SharedRegion 1 and other application use.

2.a Running Linux User land sample applications.
The first step is to load and run Linux on the Linux Host core (Core0). Linux can be loaded either through CCS or through tftp. The SYS/BIOS IPC sample application executables are in ELF format and are named as  .xe66. So pick the executable with core_id 1 for Core1, core_id 2 for Core2, and so forth. The ELF loader, mcoreloader under /usr/bin, is used for this purpose. Assume the SysLink .ko files, executables, and BIOS/IPC sample executables are copied to the /opt/syslink_evmc6678.el folder. Scripts are provided under linux-c6x-project/scripts/syslink for automating this. Copy the scripts to the /opt/syslink_evmc<6670/6678>.el folder. The scripts are named as follows: _test__core.sh.

To run the MessageQ application on C6678, run:

./messageq_app_test_8_core.sh

To run the MessageQ application on C6670, run:

./messageq_app_test_4_core.sh

Make sure that the Ipc_ResetVector used in the script matches the application map file. Application map files are available under the product/syslink_evmc<6670/6678>.el/map folder.

To get the Ipc_ResetVector values on each core for Notify:

cd ~/my-linux-c6x/product/syslink_evmc<6670/6678>.el/map
grep Ipc_ResetVector notify*.map

2.b Running kernel module sample
Scripts are provided under linux-c6x-project/scripts/syslink for automating this. Copy the scripts to the /opt/syslink_evmc<6670/6678>.el folder. The scripts are named as follows: module_test__core.sh. A procmgr script is also provided to load and run the slave cores with the BIOS IPC samples. There is one procmgr script per application, named procmgr_load_ _ _core.sh. For C6678, use procmgr_load_ _8_core.sh.

Telnet session #1:

cd /opt/syslink_evmc6678.el
./procmgr_load_notifyapp_8_core.sh

Telnet session #2:

cd /opt/syslink_evmc6678.el
./messageq_module_test_8_core.sh

Make sure that the Ipc_ResetVector used in the procmgr script matches the application map file. Application map files are available under the product/syslink_evmc<6670/6678>.el/map folder.

3.a.1 Sample log on the Linux Host for MessageQ running on C6678

/opt/syslink_evmc6678.el # ./messageq_app_test_8_core.sh
Beginning of MessageQ sample application run
insmod syslink.ko
SysLink version : 02.00.00.68_beta1
SysLink module created on Date:Jun 21 2011 Time:10:12:50
Entered KnlUtilsDrv_initializeModule
traceMask value: 0x0
Leaving KnlUtilsDrv_initializeModule 0x0
Loading and running slave core 1
ELF: ELF
ELF file header entry point: 8142c0
Program entry address: 0x8142c0
Program entry address not 10bit aligned trying to use reset vector table
Reset vector address: 0x828800
Started Program execution on core: 1
(the same loader messages are printed for slave cores 2 through 7)
Running messageq User land sample application
MessageQApp sample application
MessageQApp_startup entered
Entered SysLinkSamples_startup
SysLinkSamples_osStartup
SysLinkSamples_setToRunProcIds
Loading and starting procId [1] with [(null)]
Entered ProcMgrApp_startup
ProcMgr_attach status: [0x97d2000]
After attach: ProcMgr_getState state [0x4]
After Ipc_loadcallback: ProcMgr_getState state [0x4]
After Ipc_startcallback: ProcMgr_getState state [0x4]
ProcMgr_close status: [0x97d2000]
Leaving ProcMgrApp_startup
ProcMgrApp_startup status [0]
(the same startup messages are printed for procIds 2 through 7)
Leaving MessageQApp_startup 0
Entered MessageQApp_execute
MessageQApp_threadHandler entered
Registering heapId 0 with MessageQ for procId: 1
MessageQ_create name MSGQ_01 status [0x0] : procId [1]
Sending synchronizaion notification to ProcId: 1
Sent synchronizaion notification to ProcId: 1
MessageQ_open Status [0x0] : procId [1]
MessageQApp_queueId [0x10000] : procId [1]
Sending a message #100 to 1
Sending a message #200 to 1
Sending a message #300 to 1
Sending a message #400 to 1
Sending a message #500 to 1
Sending a message #600 to 1
Sending a message #700 to 1
Sending a message #800 to 1
Sending a message #900 to 1
Sending a message #1000 to 1
Leaving MessageQApp_threadHandler 0
(the same threadHandler messages are printed for procIds 2 through 7, with queue names MSGQ_02 through MSGQ_07 and queueIds 0x20000 through 0x70000)
Leaving MessageQApp_execute
Entered MessageQApp_shutdown
Shutting down procId [1]
Entered ProcMgrApp_shutdown
Ipc_control Ipc_CONTROLCMD_STOPCALLBACK status: [0x97d2000]
ProcMgr_detach status: [0x6a85000]
After detach: ProcMgr_getState state [0x0]
ProcMgr_close status: [0x0]
Leaving ProcMgrApp_shutdown
ProcMgrApp_shutdown status [0]
(the same shutdown messages are printed for procIds 2 through 6, with Ipc_CONTROLCMD_STOPCALLBACK status [0x0])
Shutting down procId [7]
Entered ProcMgrApp_shutdown
Ipc_control Ipc_CONTROLCMD_STOPCALLBACK status: [0x0]
ProcMgr_detach status: [0x6a85000]
Entered KnlUtilsDrv_finalizeModule
After detach: ProcMgr_getState state [0x0]
Assertion at Line no: 446 in /sim/scratch_a0868495/mcsdk-2.0-alpha1/Build/syslink_evmc6678.el/ti/syslink/utils/hlos/knl/Linux/../../../../../../ti/syslink/ipc/hlos/knl/MessageQ.c: (MessageQ_module->queues [i] == NULL) : failed
(this assertion is printed seven times, interleaved with the remaining shutdown messages; see Known Issues)
ProcMgr_close status: [0x0]
Leaving ProcMgrApp_shutdown
ProcMgrApp_shutdown status [0]
SysLinkSamples_shutdown
SysLinkSamples_osShutdown
Leaving MessageQApp_shutdown
rmmod syslink.ko
Leaving KnlUtilsDrv_finalizeModule 0x0
MessageQ sample application run is complete

Host Support
This release has been validated on the following host machines:


 * Red Hat Enterprise Linux 4 for linux-c6x sdk builds
 * Windows XP SP2 for CCS v5 Installation

Validation Information
This engineering release has been validated using the following configurations:

Known Issues
 * Shared Memory transport is verified only using MSMC memory.
 * An assert log message is seen when doing rmmod syslink.ko after running the MessageQ sample application.
 * SysLink - GateMP: on the RTOS side, a sendEvent fail log message is seen (but the sample application runs to completion).
 * An assert log message is seen when running the HeapMemMP sample application.
 * Functionality is verified only in Little Endian mode.
 * Not verified with the TI gcc wrapper tool (CGT based).

SysLink-c6x Quick Links
 * SysLink 02.00.00.68_c6x_beta1 Install Guide
 * SysLink-c6x FAQ