Optimizing High Throughput Inference on Graph Neural Networks at Shared Computing Facilities with the NVIDIA Triton Inference Server

Computing and Software for Big Science (2024)

Key words
Machine learning, Inference-as-a-service, Particle physics, Distributed computing, Heterogeneous computing, Graph neural network