It's a load-generating appliance. It's on the support page.
I installed it, but haven't had a chance to read about how to use it. I logged into the web interface and didn't see anything to start load generation.
With jumbo frames, you need to make sure every device in the network path has them configured, and verifying that can be a real pain. Do you know if you have it configured everywhere? If not, have you tried disabling jumbo frames?
edit: Nm, I was having a Friday moment. Where are you seeing the discarded packets?
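For anyone who does want to sanity-check jumbo frames end-to-end, the quick test from an ESXi host is a don't-fragment ping with a payload sized for a 9000-byte MTU (the address below is just a placeholder for your storage IP):

vmkping -d -s 8972 192.168.1.50

8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header. If that fails while a normal vmkping succeeds, something in the path isn't passing jumbo frames.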
Port-Channel13 is up, line protocol is up (connected)
Hardware is Port-Channel, address is 001c.734c.d0c3
Description: INF:TINTRISANa
Ethernet MTU 9214 bytes , BW 20000000 kbit
Full-duplex, 20Gb/s
Active members in this channel: 2
... Ethernet42 , Full-duplex, 10Gb/s
... PeerEthernet42 , Full-duplex, 10Gb/s
Fallback mode is: off
Up 20 minutes
2 link status changes since last clear
Last clearing of "show interface" counters 1:05:39 ago
30 seconds input rate 20 bps (0.0% with framing overhead), 0 packets/sec
30 seconds output rate 8.86 kbps (0.0% with framing overhead), 0 packets/sec
125 packets input, 16000 bytes
Received 0 broadcasts, 125 multicast
0 input errors, 0 input discards
847 packets output, 3994227 bytes
Sent 684 broadcasts, 127 multicast
0 output errors, 1543 output discards
Port-Channel13 is up, line protocol is up (connected)
Hardware is Port-Channel, address is 001c.734c.a639
Description: INF:TINTRISANa
Ethernet MTU 9214 bytes , BW 20000000 kbit
Full-duplex, 20Gb/s
Active members in this channel: 2
... Ethernet42 , Full-duplex, 10Gb/s
... PeerEthernet42 , Full-duplex, 10Gb/s
Fallback mode is: off
Up 20 minutes, 32 seconds
2 link status changes since last clear
Last clearing of "show interface" counters 1:04:53 ago
30 seconds input rate 179 bps (0.0% with framing overhead), 0 packets/sec
30 seconds output rate 263 kbps (0.0% with framing overhead), 13 packets/sec
434 packets input, 35648 bytes
Received 311 broadcasts, 123 multicast
0 input errors, 3 input discards
49329 packets output, 106710450 bytes
Sent 34992 broadcasts, 13843 multicast
0 output errors, 61658 output discards
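If it helps narrow down where those output discards are landing, EOS can break the counters out per member port as well as on the port-channel itself (exact command availability can vary by EOS version):

show interfaces counters discards
show interfaces Ethernet42 counters discards

Zero errors with non-zero output discards on a port-channel usually points at egress congestion rather than a physical-layer problem.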
/> cd /net/pNics/vmnic5
/net/pNics/vmnic5/> cat stats
device {
-- General Statistics:
Rx Packets:3078708
Tx Packets:1520882
Rx Bytes:25539468527
Tx Bytes:149940582
Rx Errors:0
Tx Errors:0
Rx Dropped:0
Tx Dropped:0
Multicast:1500
Collisions:0
Rx Length Errors:0
Rx Over Errors:0
Rx CRC Errors:0
Rx Frame Errors:0
Rx Fifo Errors:0
Rx Missed Errors:0
Tx Aborted Errors:0
Tx Carrier Errors:0
Tx Fifo Errors:0
Tx Heartbeat Errors:0
Tx Window Errors:0
Module Interface Rx packets:0
Module Interface Tx packets:0
Module Interface Rx dropped:0
Module Interface Tx dropped:0
-- Driver Specific Statistics:
rx_packets : 3078714
tx_packets : 1520887
rx_bytes : 25539504882
tx_bytes : 149941862
rx_errors : 0
tx_errors : 0
rx_dropped : 0
tx_dropped : 0
multicast : 1501
collisions : 0
rx_over_errors : 0
rx_crc_errors : 0
rx_frame_errors : 0
rx_fifo_errors : 0
rx_missed_errors : 0
tx_aborted_errors : 0
tx_carrier_errors : 0
tx_fifo_errors : 0
tx_heartbeat_errors : 0
rx_pkts_nic : 3078722
tx_pkts_nic : 1520695
rx_bytes_nic : 25564131733
tx_bytes_nic : 162089854
lsc_int : 3
tx_busy : 0
non_eop_descs : 0
broadcast : 1984
rx_no_buffer_count : 0
tx_timeout_count : 0
tx_restart_queue : 0
rx_long_length_errors : 0
rx_short_length_errors : 0
tx_flow_control_xon : 0
rx_flow_control_xon : 0
tx_flow_control_xoff : 0
rx_flow_control_xoff : 0
rx_csum_offload_errors : 0
rx_header_split : 0
alloc_rx_page_failed : 0
alloc_rx_buff_failed : 0
rx_no_dma_resources : 0
hw_rsc_aggregated : 0
hw_rsc_flushed : 0
fdir_match : 0
fdir_miss : 0
fdir_overflow : 0
fcoe_bad_fccrc : 0
fcoe_last_errors : 0
rx_fcoe_dropped : 0
rx_fcoe_packets : 0
rx_fcoe_dwords : 0
fcoe_noddp : 0
fcoe_noddp_ext_buff : 0
tx_fcoe_packets : 0
tx_fcoe_dwords : 0
os2bmc_rx_by_bmc : 0
os2bmc_tx_by_bmc : 0
os2bmc_tx_by_host : 0
os2bmc_rx_by_host : 0
tx_queue_0_packets : 1520887
tx_queue_0_bytes : 149941862
tx_queue_1_packets : 0
tx_queue_1_bytes : 0
tx_queue_2_packets : 0
tx_queue_2_bytes : 0
tx_queue_3_packets : 0
tx_queue_3_bytes : 0
tx_queue_4_packets : 0
tx_queue_4_bytes : 0
rx_queue_0_packets : 202826
rx_queue_0_bytes : 1519595198
rx_queue_1_packets : 2875888
rx_queue_1_bytes : 24019909684
rx_queue_2_packets : 0
rx_queue_2_bytes : 0
rx_queue_3_packets : 0
rx_queue_3_bytes : 0
rx_queue_4_packets : 0
rx_queue_4_bytes : 0
tx_pb_0_pxon : 0
tx_pb_0_pxoff : 0
tx_pb_1_pxon : 0
tx_pb_1_pxoff : 0
tx_pb_2_pxon : 0
tx_pb_2_pxoff : 0
tx_pb_3_pxon : 0
tx_pb_3_pxoff : 0
tx_pb_4_pxon : 0
tx_pb_4_pxoff : 0
tx_pb_5_pxon : 0
tx_pb_5_pxoff : 0
tx_pb_6_pxon : 0
tx_pb_6_pxoff : 0
tx_pb_7_pxon : 0
tx_pb_7_pxoff : 0
rx_pb_0_pxon : 0
rx_pb_0_pxoff : 0
rx_pb_1_pxon : 0
rx_pb_1_pxoff : 0
rx_pb_2_pxon : 0
rx_pb_2_pxoff : 0
rx_pb_3_pxon : 0
rx_pb_3_pxoff : 0
rx_pb_4_pxon : 0
rx_pb_4_pxoff : 0
rx_pb_5_pxon : 0
rx_pb_5_pxoff : 0
rx_pb_6_pxon : 0
rx_pb_6_pxoff : 0
rx_pb_7_pxon : 0
rx_pb_7_pxoff : 0
/net/pNics/vmnic5/> cd /net/pNics/vmnic7
/net/pNics/vmnic7/> cat stats
device {
-- General Statistics:
Rx Packets:6221163
Tx Packets:8717772
Rx Bytes:28650214901
Tx Bytes:50595013796
Rx Errors:0
Tx Errors:0
Rx Dropped:0
Tx Dropped:0
Multicast:1551
Collisions:0
Rx Length Errors:0
Rx Over Errors:0
Rx CRC Errors:0
Rx Frame Errors:0
Rx Fifo Errors:0
Rx Missed Errors:0
Tx Aborted Errors:0
Tx Carrier Errors:0
Tx Fifo Errors:0
Tx Heartbeat Errors:0
Tx Window Errors:0
Module Interface Rx packets:0
Module Interface Tx packets:0
Module Interface Rx dropped:0
Module Interface Tx dropped:0
-- Driver Specific Statistics:
rx_packets : 6221179
tx_packets : 8717787
rx_bytes : 28650224720
tx_bytes : 50595023506
rx_errors : 0
tx_errors : 0
rx_dropped : 0
tx_dropped : 0
multicast : 1552
collisions : 0
rx_over_errors : 0
rx_crc_errors : 0
rx_frame_errors : 0
rx_fifo_errors : 0
rx_missed_errors : 0
tx_aborted_errors : 0
tx_carrier_errors : 0
tx_fifo_errors : 0
tx_heartbeat_errors : 0
rx_pkts_nic : 6221187
tx_pkts_nic : 8026311
rx_bytes_nic : 28699991139
tx_bytes_nic : 50613591674
lsc_int : 3
tx_busy : 0
non_eop_descs : 0
broadcast : 2086
rx_no_buffer_count : 0
tx_timeout_count : 0
tx_restart_queue : 0
rx_long_length_errors : 0
rx_short_length_errors : 0
tx_flow_control_xon : 0
rx_flow_control_xon : 0
tx_flow_control_xoff : 0
rx_flow_control_xoff : 0
rx_csum_offload_errors : 0
rx_header_split : 0
alloc_rx_page_failed : 0
alloc_rx_buff_failed : 0
rx_no_dma_resources : 0
hw_rsc_aggregated : 0
hw_rsc_flushed : 0
fdir_match : 0
fdir_miss : 0
fdir_overflow : 0
fcoe_bad_fccrc : 0
fcoe_last_errors : 0
rx_fcoe_dropped : 0
rx_fcoe_packets : 0
rx_fcoe_dwords : 0
fcoe_noddp : 0
fcoe_noddp_ext_buff : 0
tx_fcoe_packets : 0
tx_fcoe_dwords : 0
os2bmc_rx_by_bmc : 0
os2bmc_tx_by_bmc : 0
os2bmc_tx_by_host : 0
os2bmc_rx_by_host : 0
tx_queue_0_packets : 8717787
tx_queue_0_bytes : 50595023506
tx_queue_1_packets : 0
tx_queue_1_bytes : 0
tx_queue_2_packets : 0
tx_queue_2_bytes : 0
tx_queue_3_packets : 0
tx_queue_3_bytes : 0
tx_queue_4_packets : 0
tx_queue_4_bytes : 0
rx_queue_0_packets : 245875
rx_queue_0_bytes : 1737630756
rx_queue_1_packets : 5975304
rx_queue_1_bytes : 26912593964
rx_queue_2_packets : 0
rx_queue_2_bytes : 0
rx_queue_3_packets : 0
rx_queue_3_bytes : 0
rx_queue_4_packets : 0
rx_queue_4_bytes : 0
tx_pb_0_pxon : 0
tx_pb_0_pxoff : 0
tx_pb_1_pxon : 0
tx_pb_1_pxoff : 0
tx_pb_2_pxon : 0
tx_pb_2_pxoff : 0
tx_pb_3_pxon : 0
tx_pb_3_pxoff : 0
tx_pb_4_pxon : 0
tx_pb_4_pxoff : 0
tx_pb_5_pxon : 0
tx_pb_5_pxoff : 0
tx_pb_6_pxon : 0
tx_pb_6_pxoff : 0
tx_pb_7_pxon : 0
tx_pb_7_pxoff : 0
rx_pb_0_pxon : 0
rx_pb_0_pxoff : 0
rx_pb_1_pxon : 0
rx_pb_1_pxoff : 0
rx_pb_2_pxon : 0
rx_pb_2_pxoff : 0
rx_pb_3_pxon : 0
rx_pb_3_pxoff : 0
rx_pb_4_pxon : 0
rx_pb_4_pxoff : 0
rx_pb_5_pxon : 0
rx_pb_5_pxoff : 0
rx_pb_6_pxon : 0
rx_pb_6_pxoff : 0
rx_pb_7_pxon : 0
rx_pb_7_pxoff : 0
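For reference, the same per-NIC counters can usually be pulled with esxcli instead of browsing vsish (on reasonably recent ESXi builds):

esxcli network nic stats get -n vmnic5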
HAHAHA! Did the support guys find it?
No, I was going to email them the config, and when I pasted it into an email I was like "I am an idiot". Then I fixed it.
It happens if the system is overloaded: internal SCSI queues backing up, waiting on SSD, or out-of-RPC-server conditions.
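If you want to see whether that queueing is visible from the hypervisor while it happens, esxtop is the quickest check (interactive, so this is just a sketch):

esxtop          # then press 'd' for the disk adapter view
                # watch KAVG/cmd - the time a command spends queued in the kernel;
                # sustained values above a couple of ms generally indicate queueing on the host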
One thing that really impresses me is the SAN-to-SAN replication. I enabled replication on a 650 GB file server only a few hours ago, and it has already been replicated from NY to NJ over the same 1 Gb connection I was using for Zerto. When I tried to use Zerto, it would easily take at least 24 hours to replicate something that large because it reads the entire VMDK file at the hypervisor level.
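Rough math on that, for anyone curious (assuming roughly 115 MB/s of usable throughput on a 1 Gb link after protocol overhead):

650 GB ≈ 650,000 MB
650,000 MB / 115 MB/s ≈ 5,650 s ≈ 1.6 hours at line rate

So a few hours for the initial sync is in line with what the wire allows, and the 24-hour Zerto figure suggests the bottleneck there was the hypervisor-level VMDK reads, not the link.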
Not convinced that it's apples to apples when comparing array-based replication to software-based replication that runs from a VM. Most (all?) other hardware vendors offer array-based replication as well.
I agree, but it beats our NetApp SnapMirror as well. I didn't believe it myself when it said the replica was up to date, so I cloned it at the recovery site just to be sure. I still have more to replicate, so I will do a test and pay closer attention to how long it takes.
And although Zerto vs. Tintri may not be apples to apples, based on my experience with Zerto, what would have taken a week took less than a day with Tintri.
Oh, and with SnapMirror we have to play around with the TCP window size to try to get the best performance. With Tintri, I just enable it and pick a schedule.
Initial replication on a 270 GB VM took about 45 minutes.
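That works out to just about line rate on a 1 Gb connection:

270 GB / 45 min ≈ 270,000 MB / 2,700 s = 100 MB/s ≈ 800 Mb/s

which is close to the practical maximum for the link once overhead is counted.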
wtf, on a 1 Gb link? Holy moly, that is insane.