Searched refs:inference (Results 1 – 6 of 6) sorted by relevance
 87  u32 inference;  member
110  lp->inference = 0;  in tcp_lp_init()
284  lp->inference = 3 * delta;  in tcp_lp_pkts_acked()
287  if (lp->last_drop && (now - lp->last_drop < lp->inference))  in tcp_lp_pkts_acked()
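The tcp_lp.c hits above can be pieced together into a minimal sketch of how TCP-LP uses its `inference` window: it starts at zero, is set to three times a measured delay `delta` when acks are processed, and the sender is treated as being in the inference phase while the time since the last drop is still inside that window. The struct layout, helper names, and the meaning of `delta` here are assumptions for illustration; only the four expressions shown in the hits come from the kernel source.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t u32;

/* Hypothetical stand-in for the per-socket state in net/ipv4/tcp_lp.c;
 * only the fields the search hits touch are modeled. */
struct lp {
	u32 inference;  /* inference window, same time units as 'now' */
	u32 last_drop;  /* timestamp of the last observed drop (0 = none) */
};

/* Mirrors the tcp_lp_init() hit: the window starts at zero. */
static void lp_init(struct lp *lp)
{
	lp->inference = 0;
	lp->last_drop = 0;
}

/* Mirrors the tcp_lp_pkts_acked() hits: the window is three times the
 * measured delay, and the check is whether 'now' still falls within
 * that window after the last recorded drop. */
static int lp_within_inference(struct lp *lp, u32 now, u32 delta)
{
	lp->inference = 3 * delta;
	return lp->last_drop && (now - lp->last_drop < lp->inference);
}
```

Note that the window check degrades gracefully: with `last_drop == 0` (no drop yet) the condition is false regardless of the window size, matching the guard in the line-287 hit.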
14 is a CPU-integrated inference accelerator for Computer Vision
15 designed to accelerate Deep Learning inference workloads.
18 designed to accelerate Deep Learning inference and training workloads.
19 - Edge AI - doing inference at an edge device. It can be an embedded ASIC/FPGA,
13 inference workloads. They are AI accelerators.