cost of design evaluations • FlexiBO is a cost-aware approach to multi-objective optimization that iteratively selects a design and an objective to evaluate. • It lets us trade off the additional information gained from an evaluation against the cost incurred by that evaluation.
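The selection rule can be illustrated with a small sketch: pick the (design, objective) pair whose surrogate model is most uncertain per unit evaluation cost. This is only an illustration under assumptions, not FlexiBO's actual acquisition function; the variance proxy, the toy objectives, and the cost values are made up for the example.

```python
# Minimal sketch of cost-aware (design, objective) selection.
# NOT FlexiBO's exact acquisition: the information-gain proxy (GP predictive
# std) and the per-objective evaluation costs below are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(50, 3))          # candidate designs
costs = {"latency": 1.0, "accuracy": 10.0}            # assumed evaluation costs

# Toy objective functions standing in for real (expensive) measurements.
def latency(x):  return float(x.sum() + 0.1 * rng.standard_normal())
def accuracy(x): return float((x ** 2).sum() + 0.1 * rng.standard_normal())
objectives = {"latency": latency, "accuracy": accuracy}

# One GP surrogate per objective, warm-started with a few evaluations.
data = {name: ([], []) for name in objectives}
for name, f in objectives.items():
    for x in candidates[:5]:
        data[name][0].append(x); data[name][1].append(f(x))

for step in range(10):
    best = None
    for name in objectives:
        X, y = data[name]
        gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
        _, std = gp.predict(candidates, return_std=True)
        i = int(np.argmax(std))
        score = std[i] / costs[name]      # information proxy per unit cost
        if best is None or score > best[0]:
            best = (score, name, i)
    _, name, i = best
    x = candidates[i]
    data[name][0].append(x); data[name][1].append(objectives[name](x))
    print(f"step {step}: evaluated objective '{name}' at candidate {i}")
```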
are the discovered sub-networks (e.g., adversarial attack, distributional shift)? • Is there an always-winning lottery ticket hidden in a randomly initialized network? • Is it possible to train the sparse sub-network efficiently?
than other learning schemes. • Semi-supervised learning (SL-CL or SCL-CL) is more robust than CL. Is there anything special about contrastive learning in terms of adversarial robustness?
Fully adversarial fine-tuning can improve both clean accuracy and robustness by eliminating these similarities. • The lack of differentiated layer-wise representations after adversarial training may hinder neural networks from achieving high clean/adversarial accuracy. Is there anything special about contrastive learning in terms of adversarial robustness?
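For reference, a minimal sketch of adversarial fine-tuning with a PGD inner loop is shown below; the toy model, epsilon, step size, and number of steps are illustrative assumptions, not the settings used in this work.

```python
# Minimal sketch of adversarial fine-tuning with a PGD inner loop.
# The model, eps, alpha, and step count are illustrative assumptions,
# not the configuration used in the experiments referenced above.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Craft an L-infinity bounded adversarial example for each input."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

# One adversarial fine-tuning step on a dummy batch.
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
x_adv = pgd_attack(model, x, y)
opt.zero_grad()
F.cross_entropy(model(x_adv), y).backward()
opt.step()
```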
Figure: Hardware-aware partitioning and mapping for multi-chiplet and multi-card AI inference systems. Framework inputs: the workload computation graph, the set of available chiplets, each vendor's intra-chiplet interconnect graph, the inter-chiplet interconnect graph, and the heterogeneous system interconnect graph (host/CPU, PCIe switches, D2D links). Framework output: a partitioning and mapping of workload modules onto chiplets over time.
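To make the problem concrete, here is a small, hypothetical greedy mapper: it assigns computation-graph modules to chiplets by balancing compute load against inter-chiplet communication cost. The toy graph, the cost model, and the greedy rule are assumptions for illustration only; they are not the framework's actual algorithm.

```python
# Hypothetical greedy mapping of workload modules onto chiplets.
# The toy computation graph, per-module compute costs, and link-cost model
# are assumptions for illustration, not the framework's algorithm.

# Workload computation graph: module -> (compute cost, predecessor modules),
# listed in topological order.
modules = {
    "M1": (4, []), "M2": (2, ["M1"]), "M3": (3, ["M1"]),
    "M4": (5, ["M2", "M3"]), "M5": (2, ["M4"]),
}
chiplets = ["C0", "C1"]
link_cost = 3          # cost of sending one edge's activations across chiplets
load = {c: 0 for c in chiplets}
mapping = {}

for m, (compute, preds) in modules.items():
    best_c, best_score = None, None
    for c in chiplets:
        # Communication penalty for predecessors placed on another chiplet.
        comm = sum(link_cost for p in preds if mapping[p] != c)
        score = load[c] + compute + comm   # balance load and traffic
        if best_score is None or score < best_score:
            best_c, best_score = c, score
    mapping[m] = best_c
    load[best_c] += compute

print(mapping)   # e.g. {'M1': 'C0', 'M2': 'C1', ...}
print(load)
```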
inference to enable planning and verification in autonomous systems • RQ1: How can structural causal models be integrated with probabilistic model checking to provide a framework for planning tasks in autonomous systems? • RQ2: How can counterfactual reasoning be integrated with probabilistic model checking to analyze the effect of interventions that have not been observed in the system's behavior?
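As a purely illustrative aside, counterfactual reasoning over a structural causal model follows the abduction-action-prediction recipe; the two-variable SCM below is a made-up toy, not a model of any autonomous system and not the integration with probabilistic model checking that RQ1/RQ2 ask about.

```python
# Toy counterfactual query on a two-variable structural causal model (SCM):
#   X := U_x
#   Y := 2 * X + U_y
# The SCM is a made-up example; it is not tied to any autonomous system.

def counterfactual_y(x_obs, y_obs, x_do):
    """What would Y have been had X been x_do, given we observed (x_obs, y_obs)?"""
    # 1. Abduction: recover the exogenous noise consistent with the observation.
    u_y = y_obs - 2 * x_obs
    # 2. Action: intervene do(X = x_do), overriding X's structural equation.
    x = x_do
    # 3. Prediction: propagate through the unchanged mechanism for Y.
    return 2 * x + u_y

# Observed X = 1, Y = 2.5  =>  U_y = 0.5; under do(X = 3), Y would have been 6.5.
print(counterfactual_y(x_obs=1.0, y_obs=2.5, x_do=3.0))   # 6.5
```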
Modular networks can automatically decompose the shapes into different learnable representations. • With the introduction of the ID classifier, the decomposition improves significantly: a large majority of the images for each shape are routed through a single module.
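A minimal sketch of the routing idea follows: a small classifier predicts a module ID per input and the input is processed by that module only. The layer sizes, module count, and hard argmax routing are assumptions for illustration, not the exact architecture used here.

```python
# Minimal sketch of routing inputs through one of several modules via an
# ID classifier. Layer sizes, module count, and hard argmax routing are
# illustrative assumptions, not the exact architecture referenced above.
import torch
import torch.nn as nn

class ModularNet(nn.Module):
    def __init__(self, in_dim=784, hidden=64, n_modules=4, n_classes=10):
        super().__init__()
        self.router = nn.Linear(in_dim, n_modules)   # ID classifier
        self.modules_list = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))
            for _ in range(n_modules)
        )

    def forward(self, x):
        module_id = self.router(x).argmax(dim=-1)    # one module per input
        out = torch.stack([self.modules_list[int(i)](xi) for xi, i in zip(x, module_id)])
        return out, module_id

net = ModularNet()
x = torch.rand(8, 784)
logits, ids = net(x)
print(logits.shape, ids)   # torch.Size([8, 10]) and the chosen module per image
```

Note that the hard argmax above does not propagate gradients to the ID classifier, so in practice the router would need a differentiable relaxation or a separate supervision signal.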
this pre-trained model help us use less data for fine-tuning? • Does the result of this fine-tuning depend on the languages used for pretraining? • How robust is the fine-tuned model to distribution shift between the fine-tuning data and the test data?
Single- and multi-cycle microarchitectures waste time because the clock period is dictated by the critical path. • In a single-cycle design, if the longest instruction takes 1,100 ps, then every instruction takes 1,100 ps. • Solution: • Use a timer to measure elapsed time. • Set the timer to the duration of the current instruction. • When the timer runs out, move on to the next instruction.

Benchmark program: square root
                      Single-Cycle   Multi-Cycle   Unit-Cycle
Clock Period (ps)            1,100           300          100
Cycles Executed                360         1,316        2,748
Execution Time (ps)        396,000       394,800      274,800

Unit-Cycle is more than 40% faster than Single-Cycle or Multi-Cycle.
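The execution-time row follows directly from clock period × cycles executed; the short check below reproduces those numbers and the more-than-40%-faster claim.

```python
# Reproduce the table's arithmetic: execution time = clock period * cycles.
designs = {
    "Single-Cycle": (1100, 360),
    "Multi-Cycle":  (300, 1316),
    "Unit-Cycle":   (100, 2748),
}
exec_time = {name: period * cycles for name, (period, cycles) in designs.items()}
print(exec_time)   # {'Single-Cycle': 396000, 'Multi-Cycle': 394800, 'Unit-Cycle': 274800}

# Speedup of Unit-Cycle over the others (>1.4x, i.e., more than 40% faster).
for name in ("Single-Cycle", "Multi-Cycle"):
    print(name, round(exec_time[name] / exec_time["Unit-Cycle"], 2))
# Single-Cycle 1.44, Multi-Cycle 1.44
```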
Areas:
- Causal AI
- ML for Systems
- Systems for ML
- Adversarial ML
- Robot Learning
- Representation Learning
Sponsors:
Collaborators:
- Saeid Ghafouri (PhD student)
- Fatemeh Ghofrani (PhD student)
- Abir Hossen (PhD student)
- Shahriar Iqbal (PhD student)
- Sonam Kharde (Postdoc)
- Hamed Damirchi (PhD student)
- Mehdi Yaghouti (Postdoc)
- Samuel Whidden (Undergraduate)
- Rasool Sharifi (PhD student)
- Kimia Noorbakhsh (Undergraduate)