Friday, December 29, 2017

Windows 2012/2016 Networking - VLAN + Team + Hyper-V vSwitch

1. How to create Team VLAN interfaces in Windows 2012/2016 (the so-called VLAN Mode), where 802.1Q tags are parsed at the Team interface level (this method is not supported with a Hyper-V vSwitch):

https://community.mellanox.com/docs/DOC-1845


2. Team Interfaces
There are different ways of interfacing with the team:
  • Default mode: all traffic from all VLANs is passed through the team
  • VLAN mode: Any traffic that matches a VLAN ID/tag is passed through.  Everything else is dropped.
Inbound traffic is delivered to at most one team interface.

The only supported configuration for Hyper-V is shown above: Default mode, passing all traffic through to the Hyper-V Switch. Do all the VLAN tagging and filtering on the Hyper-V Switch. You cannot mix other interfaces with this team: the team must be dedicated to the Hyper-V Switch. REPEAT: this is the only supported configuration for Hyper-V.
A new team has one team interface by default. 
Any team interfaces created after the initial team creation must be VLAN mode team interfaces (bound to a VLAN ID).  You can delete these team interfaces.
  • Get-NetAdapter: get the properties of a team interface
  • Rename-NetAdapter: rename a team interface
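
As an illustration of the above (team and adapter names are made up for the example), the standard LBFO cmdlets can build a team and add a VLAN-mode team interface:

```powershell
# Create a team from two physical NICs (adapter names are examples)
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Add a second, VLAN-mode team interface bound to VLAN 42
Add-NetLbfoTeamNic -Team "Team1" -VlanID 42 -Name "Team1-VLAN42"

# Team interfaces show up as ordinary adapters
Get-NetAdapter -Name "Team1-VLAN42"
Rename-NetAdapter -Name "Team1-VLAN42" -NewName "Backup-VLAN42"
```

Remember: per the note above, do none of this on a team that is bound to a Hyper-V switch.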
Team Members
  • Any physical ETHERNET adapter with a Windows Logo (for stability reasons and promiscuous mode for VLAN trunking) can be a team member.
  • Teaming of InfiniBand, Wifi, WWAN not supported.
  • Teams made up of teams not supported.
You can have team members in active or standby mode.
Cribbed from: http://www.aidanfinn.com/?p=12924
3. Official reading:
https://gallery.technet.microsoft.com/Windows-Server-2016-839cb607/view/Discussions#content

1.1.1      Using VLANs

VLANs are a powerful tool that solves many problems for administrators. There are a few rules for using VLANs that will help to make the combination of VLANs and NIC Teaming a very positive experience.
1)      Anytime you have NIC Teaming enabled, the physical switch ports the host is connected to should be set to trunk (promiscuous) mode. The physical switch should pass all traffic to the host for filtering without modification.[1]
2)      Anytime you have NIC Teaming enabled, you must not set VLAN filters on the NICs using the NICs' advanced properties settings. Let the teaming software or the Hyper-V switch (if present) do the filtering.
When using SET all VLAN settings must be configured on the VM’s switch port. 
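
With SET, that per-port VLAN configuration is done with the Hyper-V cmdlets; a minimal sketch (the VM name and VLAN ID are examples):

```powershell
# Put the VM's virtual NIC into access mode on VLAN 17
Set-VMNetworkAdapterVlan -VMName "VM-C" -Access -VlanId 17

# Verify the result
Get-VMNetworkAdapterVlan -VMName "VM-C"
```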

1.1.1.1     VLANs in a Hyper-V host

This section applies only to NIC Teaming.  It does not apply to SET as a SET team has no team interfaces on which a VLAN may be enabled.
In a Hyper-V host VLANs should be configured only in the Hyper-V switch, not in the stand-alone NIC Teaming software. Configuring team interfaces with VLANs can easily lead to VMs that are unable to communicate on the network due to collisions with VLANs assigned in the Hyper-V switch.  Consider the following NIC Teaming example:



Figure 6 - VLAN misconfiguration (stand-alone NIC Teaming)

Figure 6 shows a common misconfiguration that occurs when administrators try to use team interfaces for VLAN support and also bind the team to a Hyper-V switch. In this case VM C will never receive any inbound traffic because all the traffic destined for VLAN 17 is taken out at the teaming module. All traffic except traffic tagged with VLAN 17 will be forwarded to the Hyper-V switch, but VM C's inbound traffic never arrives. This kind of misconfiguration has been seen often enough for Microsoft to declare this configuration, i.e., VLANs exposed at the teaming layer while the team is bound to the Hyper-V switch, unsupported. Repeat: if a team is bound to a Hyper-V switch, the team MUST NOT have any VLAN-specific team interfaces exposed. This is an unsupported configuration.


1.1.1.2     VLANs in a Hyper-V VM

1)      The preferred method of supporting multiple VLANs in a VM is to provide the VM multiple ports on the Hyper-V switch and associate each port with a VLAN. Never team these ports in the VM as it will certainly cause communication problems.
2)      If the VM has multiple SR-IOV VFs make sure they are on the same VLAN before teaming them in the VM. It’s easily possible to configure the different VFs to be on different VLANs and, like in the previous case, it will certainly cause communication problems.
3)      The only safe way to use VLANs with NIC Teaming in a guest is to team Hyper-V ports that are
a.      Each connected to a different external Hyper-V switch, and
b.      Each configured to be associated with the same VLAN (or all associated with untagged traffic only).
TIP: If you must have more than one VLAN exposed into a guest OS consider renaming the ports in the guest to indicate what the VLAN is. E.g., if the first port is associated with VLAN 12 and the second port is associated with VLAN 48, rename the interface Ethernet to be EthernetVLAN12 and the other to be EthernetVLAN48.  Renaming interfaces is easy using the Windows PowerShell Rename-NetAdapter cmdlet or by going to the Network Connections panel in the guest and renaming the interfaces
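
Following the tip above, the renames might look like this (the default adapter names "Ethernet" / "Ethernet 2" are assumptions about the guest):

```powershell
Rename-NetAdapter -Name "Ethernet"   -NewName "EthernetVLAN12"
Rename-NetAdapter -Name "Ethernet 2" -NewName "EthernetVLAN48"
```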

[1] Advanced users may choose to restrict the switch ports to passing only the VLANs present on the host. While this may slightly improve performance in networks with many VLANs that the local host doesn't access, it risks creating difficult-to-diagnose problems when, for example, a VM is migrated to a host and uses a VLAN not previously present on that destination host.

Friday, December 15, 2017

Tuning resource consumption in FortiOS

Although the article describes the ancient FortiOS 4.0, everything in it still applies to later FortiOS versions as well, especially under heavy load and with a large number of features enabled:

http://kb.fortinet.com/kb/documentLink.do?popup=true&externalID=FD33078&languageId=
http://kb.fortinet.com/kb/documentLink.do?externalID=FD33103


The takeaway: with limited resources, enable features judiciously, so that the system does not end up protecting itself with Kernel Conserve Mode.
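
Before the box slips into conserve mode, it is worth watching memory usage; a few diagnostic commands (exact availability varies between FortiOS versions):

```
diagnose hardware sysinfo memory    # overall memory usage
diagnose sys top 5                  # top processes, refreshed every 5 seconds
diagnose hardware sysinfo conserve  # conserve-mode thresholds and current state
```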

Wednesday, December 6, 2017

FortiOS versions, builds and dates

Version 5

MR2
Build 0688, P4 (07/22/2015)
Build 0670, P3 (05/18/2015)
Build 0642, P2 (11/18/2014)
Build 0618, P1 (09/15/2014)
Build 0589, GA (06/14/2014)
GA
Build 0318, P12 (05/15/2015)
Build 0311, P11 (01/23/2015)
Build 0305, P10 (12/16/2015)
Build 0292, P9 (08/01/2014)
Build 0291, P8 (07/29/2014)
Build 3608, P7 (04/10/2014) See note 1 at the bottom
Build 0271, P6 (01/25/2014)
Build 0252, P5 (11/01/2013)
Build 0228, P4 (08/08/2013)
Build 0208, P3 (06/03/2013)
Build 0198, P3, Beta 1 (05/22/2013) pulled by Fortinet
Build 0179, P2 (03/2013)
Build 0147, P1 (12/2012)
Build 0128, First release (11/2012)

Version 4

MR 3 (End of Support Date for Version 4.0 MR3 = March 19, 2014, unless the device does not support FortiOS version 5.0)
Build 0689, P18 (08/06/2014)
Build 0688, P17 (07/14/2014)
Build 0686, P16 (07/03/2014)
Build 0672, P15 (09/05/2013)
Build 0665, P14 (05/17/2013)
Build 0664, P13 (04/30/2013) pulled by Fortinet
Build 0656, P12 (02/27/2013)
Build 0646, P11 (11/2012)
Build 0639, P10 (09/2012)
Build 0637, P9 (08/22/2012)
Build 0632, P8 (07/05/2012)
Build 0535, P7 (05/2012)
Build 0521, P6 (03/2012)
Build 0513, P5 (02/2012)
Build 0511, P4 (01/2012)
Build 0496, P3 (11/2011)
Build 0482, P2 (09/2011)
Build 0458, P1 (06/2011)
Build 0441, First release (03/18/2011)
MR 2 (End of Support Date for Version 4.0 MR2 = April 1, 2013)
Build 0356, P15 (02/21/2013)
Build 0353, P14 (01/09/2013)
Build 0349, P13 (09/04/2012)
Build 0346, P12 (06/06/2012)
Build 0342, P11 (02/27/2012)
Build 3118, P10 (01/17/2012) with MS hotfix
Build 0338, P10 (12/06/2011)
Build 0334, P9 (10/2011)
Build 0328, P8 (07/2011)
Build 0324, P7 (05/2011)
Build 0320, P6 (04/2011)
Build 0315, P5 (04/2011)
Build 0313, P4 (03/2011)
Build 0303, P3 (12/14/2010)
Build 0291, P2 (08/2010)
Build 0279, P1 (05/2010)
Build 0272, First release
MR 1 (End of Support Date for Version 4.0 MR1 = August 24, 2012)
Build 0217, P10 (06/16/2011)
Build 0213, P9 (01/28/2011) pulled by Fortinet
Build 0209, P8 (09/29/2010) pulled by Fortinet
Build 0207, P7 pulled by Fortinet
Build 0205, P6 pulled by Fortinet
Build 0204, P5 pulled by Fortinet
Build 0196, P4 pulled by Fortinet
Build 0194, P3 pulled by Fortinet
Build 0192, P2 pulled by Fortinet
Build 0185, P1 pulled by Fortinet
Build 0178, First release
GA (End of Support Date for Version 4.0 = February 24, 2012)
Build 0113, P4 (12/02/2009)
Build 0106, P3 (06/16/2009)
Build 0099, P2 (04/07/2009)
Build 009x, P1 (2009) pulled by Fortinet
Build 0092, First release (02/20/2009)

Version 3

MR 7 (End of Support Date for Version 3.0 MR7 = July 18, 2011)
Build 0754, P10 (10/27/2010)
Build 0753, P9 (02/17/2010)
Build 0752, P8 (12/23/2009)
Build 0750, P7 (10/09/2009)
Build 0744, P6 (06/30/2009)
Build 0741, P5 (04/08/2009)
Build 0740, P4
Build 0737, P3 (03/03/2009)
Build 0733, P2 (11/21/2008)
Build 0730, P1 (09/19/2008)
Build 0726, First release (07/16/2008)
MR 6 (End of Support Date for Version 3.0 MR6 = February 4, 2011)
Build 0678, P6
Build 0677, P5
Build 0673, P4 (10/27/2008)
Build 0670, P3 (07/29/2008)
Build 0668, P2 (05/14/2008)
Build 0662, P1 (03/17/2008)
Build 0660, First release (02/01/2008)
MR 5 (End of Support Date for Version 3.0 MR5 = July 3, 2010)
Build 0576, P7
Build 0575, P6
Build 0574, P5 (02/20/2008)
Build 0572, P4 (11/26/2007)
Build 0568, P3 (10/18/2007)
Build 5101, P2 (09/05/2007) Memory Optimized for smaller models
Build 0565, P2 (09/05/2007)
Build 0564, P1 (08/17/2007)
Build 0559, First release
Build 0552, CR3
Build 0547, CR2
MR 4 (End of Support Date for Version 3.0 MR4 = December 29, 2009)
Build 0483, P5 (07/03/2007)
Build 0480, P4 (03/30/2007)
Build 0479, P3
Build 0477, P2
Build 0475, P1
Build 0474, First release
Build 0468, CR2
MR 3 (End of Support Date for Version 3.0 MR3 = October 2, 2009)
Build 0418, P14
Build 0416, P12
Build 8552, P11 (09/01/2007) Memory Optimized for smaller models
Build 0416, P11 (09/01/2007)
Build 8509, P10 (07/05/2007) Memory Optimized for smaller models
Build 0415, P10 (07/05/2007)
Build 8468, P9 (05/04/2007) Memory Optimized for smaller models
Build 0413, P9 (05/04/2007)
Build 0411, P8 (03/30/2007)
Build 0410, P7 (03/08/2007)
Build 0406, P6 (01/26/2007)
Build 0405, P5 (01/05/2007)
Build 0404, P4
Build 0403, P3 (11/06/2006)
Build 0402, P2
Build 0401, P1
Build 0400, First release (10/02/2006)
Build 0394, CR2
Build 0388, CR1
MR 2 (The versions below are beyond end of support dates)
Build 0319
Build 0318 (06/30/2006)

Version 2.8

MR 12
Build 520, P1
Build 519
MR 11
Build 490

 
NOTES
Note 1: These are all patches for the Heartbleed SSL bug, based on build 0271 (P6)
  • Build 4429 for FGT100D, FGT140D, FGT140D_POE
  • Build 4439 for FGT 280D_POE
  • Build 3483 for FGT 3600C

Monday, December 4, 2017

How to switch SSL VPN on a Fortigate from TCP to UDP (DTLS)

config vpn ssl settings
    set dtls-tunnel enable/disable
end
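
After changing it, the active value can be checked with FortiOS's built-in grep filter:

```
get vpn ssl settings | grep dtls
```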

A good article on the problems of TCP-over-TCP designs (which is what SSL VPN, i.e. IP over HTTPS, essentially is in this particular case):

http://sites.inka.de/bigred/devel/tcp-tcp.html

Monday, November 27, 2017

How to enable the FortiGuard-updated HTTPS whitelist to exempt resources from SSL Inspection on a Fortigate

config firewall ssl-ssh-profile
edit deep-inspection
set whitelist enable
end

http://help.fortinet.com/fos50hlp/54/Content/FortiOS/fortigate-security-profiles-54/SSL_SSH_Inspection/Secure%20whitelist%20database.htm

How to change the certificate used for the error page under Full SSL Inspection in FortiOS?

FortiOS 5.4 and later:
config user setting 
# get
auth-type : http https ftp telnet 
auth-cert : Fortinet_Factory 
auth-ca-cert : 
auth-secure-http : disable 
auth-http-basic : disable 
auth-timeout : 5 
auth-timeout-type : idle-timeout 
auth-portal-timeout : 3 
radius-ses-timeout-act: hard-timeout 
auth-blackout-time : 0 
auth-invalid-max : 5 
auth-lockout-threshold: 3 
auth-lockout-duration: 0 
auth-ports:
The certificate Fortinet_Factory is used by default. To avoid errors, you can either change this certificate to the certificate used for SSL inspection or you can install this certificate on all client devices. Which solution you choose depends on your own environment and what certificates you are already using.
http://cookbook.fortinet.com/certificate-errors-blocked-websites/
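
So, to point authentication pages at the certificate you already use for SSL inspection (the certificate name below is an example):

```
config user setting
    set auth-cert "SSL-Inspection-Cert"
end
```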

Wednesday, November 15, 2017

How to enable direct management access to a FortiAP access point when it is connected to a Fortigate

The solution is simple: enable this feature in the WTP connection profile settings (FortiAP Profiles):

config wireless-controller wtp-profile
    edit "FAP221C-default"
         set allowaccess telnet
    end

The available options are Telnet, SSH, HTTP, and HTTPS.
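
allowaccess takes a space-separated list, so several protocols can be enabled at once, e.g.:

```
config wireless-controller wtp-profile
    edit "FAP221C-default"
        set allowaccess ssh https
    end
```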

Thursday, October 19, 2017

How to extend disk space in FortiAnalyzer VM

If you are running out of disk space on a FortiAnalyzer VM running firmware version 5, these are the steps to extend it.
1 - Make sure that the LVM service is running. Run the following command:
execute lvm info
The command output should provide a list of available disks:
FAZVM # execute lvm info
  disk01  In use          20.0(GB)
  disk02  Not present
  disk03  Not present
  disk04  Not present
  disk05  Not present
  disk06  Not present
  disk07  Not present
  disk08  Not present
From the output we can see that there is only one disk with a 20GB capacity.
If there is no output, then LVM has not started yet; run the following command:
execute lvm start
2 - Stop the FortiAnalyzer VM and add a new disk to the virtual machine. For this example we are adding a 10GB disk.
3 - After adding the new disk in the VM settings and booting up the FortiAnalyzer, run the command "get system status":
FAZVM # get sys status
Platform Type                   : FAZVM
Platform Full Name              : FortiAnalyzer-VM
.....
Disk Usage                      : Free 17.45GB, Total 19.68GB
.......
The system still recognizes only 20GB of capacity; it has not yet added the new disk.
Run command "execute lvm info", it now displays that a second 10GB disk is available but not in use:
FAZVM # exec lvm info
  disk01  In use          20.0(GB)
  disk02  Not in use      10.0(GB)
  disk03  Not present
  disk04  Not present
  disk05  Not present
  disk06  Not present
  disk07  Not present
  disk08  Not present
4 - If you run "execute lvm extend" with no arguments, it will list the disks available for extension:
FAZVM # exec lvm extend
Disk(s) currently not in use:
  disk02      10.0(GB)
More than one disk can be included at a time
FAZVM # exec lvm extend
extend [arg ...]    Argument list (0 to 7).
5 - Include the new disk. This command requires a reboot of the FortiAnalyzer:
FAZVM # exec lvm extend disk02
This operation will need to reboot the system.
Do you want to continue? (y/n)
6 - After the reboot, the new disk has been included:
FAZVM # exec lvm info
  disk01  In use          20.0(GB)
  disk02  In use          10.0(GB)
  disk03  Not present
  disk04  Not present
  disk05  Not present
  disk06  Not present
  disk07  Not present
  disk08  Not present
FAZVM # get sys status
Platform Type                   : FAZVM
Platform Full Name              : FortiAnalyzer-VM
Version                         : v5.0-build0266 131108 (GA Patch 5)
....
Disk Usage                      : Free 27.29GB, Total 29.52GB
....


http://kb.fortinet.com/kb/documentLink.do?externalID=FD34953

Sunday, September 24, 2017

Various Fortigate models - get hardware status

FGT-80D # get hardware status 
Model name: FortiGate-80D
ASIC version: not available
CPU: Intel(R) Atom(TM) CPU N2600 @ 1.60GHz
Number of CPUs: 4
RAM: 1958 MB
Compact Flash: 980 MB /dev/sdb
Hard disk: 15272 MB /dev/sda
USB Flash: not available
Network Card chipset: RealTek RTL-8168 Gigabit Ethernet driver 8.038.00 (rev.)
FortiGate 92D:
get hardware status 
Model name: FortiGate-92D
ASIC version: not available
CPU: Intel(R) Atom(TM) CPU D525   @ 1.80GHz
Number of CPUs: 4
RAM: 1974 MB
Compact Flash: 15331 MB /dev/sda
Hard disk: 15272 MB /dev/sda
USB Flash: not available
Network Card chipset: Fortinet 92D Ethernet driver (rev.)
Network Card chipset: Intel(R) Gigabit Ethernet Network Driver (rev.0003)

Model name: FortiGate-90D
ASIC version: FPGA
ASIC SRAM: 64M
CPU: FortiSOC2
Number of CPUs: 1
RAM: 1838 MB
Compact Flash: 1907 MB /dev/sdb
Hard disk: 30533 MB /dev/sda
USB Flash: not available

Model name: FortiGate-500D
ASIC version: CP8
ASIC SRAM: 64M
CPU: Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz
Number of CPUs: 4
RAM: 7962 MB
Compact Flash: 15331 MB /dev/sda
Hard disk: 114473 MB /dev/sdb
USB Flash: not available
Network Card chipset: Intel(R) Gigabit Ethernet Network Driver (rev.0003)
Network Card chipset: FortiASIC NP6 Adapter (rev.)

Model name: FortiGate-200D
ASIC version: CP8
ASIC SRAM: 64M
CPU: Intel(R) Celeron(R) CPU G540 @ 2.50GHz
Number of CPUs: 2
RAM: 3955 MB
Compact Flash: 15331 MB /dev/sda
Hard disk: not available
USB Flash: not available
Network Card chipset: Intel(R) PRO/1000 Network Connection (rev.0000)

Model name: FortiWiFi-50E
ASIC version: not available
CPU: ARMv7
Number of CPUs: 2
RAM: 2021 MB
MTD Flash: 128 MB /dev/mtd
Hard disk: not available
USB Flash: not available
Network Card chipset: Marvell NETA Gigabit Ethernet driver 00000010 (rev.)
WiFi Chipset: Atheros 
WiFi firmware version: 0.9.17.1

FG1000D# get hardware status 
Model name: FortiGate-1000D
ASIC version: CP8
ASIC SRAM: 64M
CPU: Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz
Number of CPUs: 8
RAM: 15979 MB
Compact Flash: 3840 MB /dev/sda
Hard disk: 114473 MB /dev/sdb
USB Flash: not available
Network Card chipset: Intel(R) PRO/1000 Network Connection (rev.0000)
Network Card chipset: FortiASIC NP6 Adapter (rev.)

Model name: Fortigate-1500D
ASIC version: CP8
SRAM: 64M
CPU: Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz
Number of CPUs: 12
RAM: 15978 MB
Compact Flash: 30653 MB /dev/sda
Hard disk: 114473 MB /dev/sdb
USB Flash: not available

Model name: FortiWiFi-92D
ASIC version: not available
CPU: Intel(R) Atom(TM) CPU D525 @ 1.80GHz
Number of CPUs: 4
RAM: 1971 MB
Compact Flash: 15331 MB /dev/sda
Hard disk: 15272 MB /dev/sda
USB Flash: not available
Network Card chipset: Intel(R) Gigabit Ethernet Network Driver (rev.0003)
WiFi Chipset: Atheros AR9300
WiFi firmware version: 0.9.17.1
Model name: FortiGate-30D
ASIC version: CP0
ASIC SRAM: 64M
CPU: FortiSOC2
Number of CPUs: 1
RAM: 936 MB
Compact Flash: 3879 MB /dev/sda
Hard disk: not available
USB Flash: not available

Model name: Fortigate-60
ASIC version: CP5
SRAM: 64M
CPU: VIA Samuel 2
RAM: 123 MB<==128MB
Compact Flash: 30 MB /dev/hda<==32MB
USB Flash: not available
Network Card chipset: VIA VT6105M Rhine-III (rev.0x96)
Network Card chipset: VIA VT6102 Rhine-II (rev.0x51)

Model name: Fortigate-200A
ASIC version: CP5
SRAM: 64M
CPU: Intel(R) Celeron(TM) CPU 400MHz
RAM: 503 MB<==512MB
Compact Flash: 61 MB /dev/hda<==64MB
Hard disk: not available
USB Flash: not available
Network Card chipset: RealTek RTL8139 Fast Ethernet (rev.0x10)
Network Card chipset: VIA VT6102 Rhine-II (rev.0x74)

Model name: Fortigate-300A 
ASIC version: CP5 
SRAM: 64M 
CPU: Intel(R) Celeron(R) CPU 2.00GHz 
RAM: 503 MB<==512MB 
Compact Flash: 122 MB /dev/hdc 
Hard disk: not available 
USB Flash: not available 
Network Card chipset: Intel(R) PRO/100 M Desktop Adapter (rev.0x0010) 
Network Card chipset: Intel(R) PRO/1000 Network Connection (rev.0004)

Model name: Fortigate-400A
ASIC version: CP5
SRAM: 64M
CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz
RAM: 503 MB<==512MB
Compact Flash: 61 MB /dev/hdc<==64MB
Hard disk: not available
USB Flash: not available
Network Card chipset: Intel(R) PRO/100 M Desktop Adapter (rev.0x0010)
Network Card chipset: Intel(R) PRO/1000 Network Connection (rev.0004)

Model name: Fortigate-60B
ASIC version: CP6
ASIC SRAM: 64M
CPU: VIA Samuel 2
RAM: 248 MB
MTD Flash: 64 MB /dev/mtd
Hard disk: not available
USB Flash: not available
Built-in modem: Yes

Friday, September 22, 2017

How to view physical port statistics on a Fortigate

FG300D # fnsysctl ifconfig port5
port5 Link encap:Ethernet  HWaddr 00:09:0F:09:00:08
 UP BROADCAST RUNNING PROMISC ALLMULTI SLAVE MULTICAST  MTU:1500  Metric:1
 RX packets:4376009 errors:0 dropped:0 overruns:0 frame:0
 TX packets:487393 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000 
 RX bytes:2881575345 (2.7 GB)  TX bytes:236369685 (225.4 MB) 
 
 
FG300D5 # fnsysctl ifconfig port6
port6 Link encap:Ethernet  HWaddr 00:09:0F:09:00:08
 UP BROADCAST RUNNING PROMISC ALLMULTI SLAVE MULTICAST  MTU:1500  Metric:1
 RX packets:4519551 errors:0 dropped:4 overruns:0 frame:0
 TX packets:459316 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000 
 RX bytes:1545202854 (1.4 GB)  TX bytes:223487935 (213.1 MB) 

diagnose hardware deviceinfo nic port6
Name            :np6_0
PCI Slot        :0000:01:00.0
irq             :16
Board           :FGT300d
SN              :FGT
Major ID        :5
Minor ID        :0
lif id          :5
lif oid         :135
netdev oid      :135
netdev flags    :1b03
Current_HWaddr   00:09:0f:09:00:08
Permanent_HWaddr 90:6c:ac:f5:86:51
phy name        :port6
bank_id         :1
phy_addr        :0x09
lane            :9
sw_port         :0
sw_np_port      :0
vid_phy[6]      :[0x07][0x00][0x00][0x00][0x00][0x00]
vid_fwd[6]      :[0x00][0x00][0x00][0x00][0x00][0x00]
oid_fwd[6]      :[0x00][0x00][0x00][0x00][0x00][0x00]
========== Link Status ==========
Admin           :up
netdev status   :up
autonego_setting:1
link_setting    :1
link_speed      :1000
link_duplex     :1
Speed           :1000
Duplex          :Full
link_status     :Up
rx_link_status  :1
int_phy_link    :0
local_fault     :0
local_warning   :0
remote_fault    :0
============ Counters ===========
Rx Pkts         :4521749
Rx Bytes        :1545937450
Tx Pkts         :466666
Tx Bytes        :229830999
Host Rx Pkts    :4502578
Host Rx Bytes   :1455559526
Host Rx dropped :0
Host Tx Pkts    :451840
Host Tx Bytes   :224220182
Host Tx dropped :0