GPL LICENSE SUMMARY
Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
Copyright(c) 2013-2015 Wind River Systems, Inc. All rights reserved.
This program is free software; you can redistribute it and/or modify
it under the terms of version 2 of the GNU General Public License as
published by the Free Software Foundation.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
The full GNU General Public License is included in this distribution
in the file called LICENSE.GPL.
Contact Information:
Intel Corporation
-----------------------------------------------------------------------
DESCRIPTION
===========
The Titanium Cloud AVP virtual NIC is a shared-memory-based, high-performance
networking device. Its potential maximum throughput is higher than that of
other standard virtual NIC devices (e.g., e1000, virtio). This package provides
the AVP Linux kernel device driver source, which can be compiled against most
recent Linux kernel distributions.
REQUIREMENTS
============
Compilation:
    64-bit Linux Kernel, version >= 3.2
    loadable module support
    PCI device support
    gcc compiler
    DPDK v17.05+

VM Runtime:
    AVP type virtual NIC
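
As a quick sanity check before building, the compilation prerequisites above
can be verified with standard commands. This is only a minimal sketch; the
config file and DPDK paths are examples and may differ on your distribution.

uname -r                                                  # kernel version, expect >= 3.2
uname -m                                                  # architecture, expect x86_64
grep -E 'CONFIG_MODULES=|CONFIG_PCI=' /boot/config-$(uname -r)   # module and PCI support
gcc --version
ls /usr/local/share/dpdk                                  # assumed DPDK install path (see COMPILE)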
DELIVERABLE
===========
The Titanium Cloud AVP Linux kernel device driver is delivered as source, with
the required makefiles, in a compressed tarball so that it can be compiled for
the applicable guest Linux distribution as an external kernel module.
COMPILE
=======
Clone the AVP kernel module driver repo and compile. This produces the
wrs_avp.ko kernel module. Compilation of this driver depends on an installed
set of DPDK library headers; refer to the documentation at dpdk.org to build
and install the DPDK software. The example below assumes a DPDK installation
at /usr/local/share/dpdk and Linux kernel source headers installed at
/usr/src/linux-headers-$(uname -r).
mkdir -p /tmp/wrs
cd /tmp/wrs
git clone https://github.com/Wind-River/titanium-cloud-avp-kmod.git
cd titanium-cloud-avp-kmod
export RTE_SDK=/usr/local/share/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
make KSRC=/usr/src/linux-headers-$(uname -r)
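
Optionally, the metadata of the freshly built module can be inspected with
modinfo to confirm the build succeeded. This is only a quick check; the exact
location of the generated wrs_avp.ko within the build tree may vary.

modinfo ./wrs_avp.ko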
INSTALL
=======
To install the wrs_avp.ko kernel module that was just built, run the following
commands from the same directory as the previous step. This installs the
kernel module into the default external module directory, which is specific to
your system. Typically this is:
/lib/modules/$(uname -r)/extra
sudo make KSRC=/usr/src/linux-headers-$(uname -r) modules_install
sudo depmod -a
sudo modprobe wrs_avp
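
To confirm that the module is loaded, standard tooling can be used. This is an
optional check; the exact kernel log messages depend on the driver version.

lsmod | grep wrs_avp
dmesg | grep -i avp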
CONFIGURATION and USAGE
=======================
Loading the module with no parameters allows the AVP kernel threads that are
dedicated to receiving packets to be affined to any of the online CPUs. If the
list of candidate CPUs must be restricted, the kthread_cpulist module
parameter can be used, for example:
modprobe wrs_avp kthread_cpulist=0-2
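
To apply the same restriction automatically at boot, an options line can be
added to a modprobe configuration file. This is a minimal sketch: the file name
wrs_avp.conf is arbitrary, and the sysfs path below is only present if the
module exposes the parameter as readable.

echo "options wrs_avp kthread_cpulist=0-2" | sudo tee /etc/modprobe.d/wrs_avp.conf
cat /sys/module/wrs_avp/parameters/kthread_cpulist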
HARDWARE OFFLOAD FEATURES
=========================
The WRS AVP kernel module supports the following hardware offload features.
Unless stated otherwise, all other offload capabilities are not supported and
should not be specified.
1. VLAN insert and strip.
This feature allows the guest and host to exchange VLAN tagging information
in packet metadata rather than modifying packet headers to add and remove
VLAN tags. In many circumstances this capability reduces the CPU cost
associated with processing VLAN-tagged packets at both the guest and host
levels. Enabling or disabling this feature via ethtool is currently not
supported; it is enabled by default when the host device reports that it
supports VLAN offload capabilities.
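
The VLAN offload state of the AVP interface can be inspected from inside the
guest with standard ethtool. This is an illustrative check; the interface name
eth0 is an assumption and should be replaced with the actual AVP interface
name on your system.

ethtool -k eth0 | grep -i vlan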