
          Events


          From Matrix to Tensor: Algorithm and Hardware Co-Design for Energy-Efficient Deep Learning

Posted: Jun 24, 2019

Venue: Meeting Room 1012, North Campus    Event time: 2019-06-27 10:30:00

          https://meeting.xidian.edu.cn/uploads/images/201906/1561339389.jpg

Title: From Matrix to Tensor: Algorithm and Hardware Co-Design for Energy-Efficient Deep Learning

Lecturer: Bo Yuan

Time: 2019-06-27 10:30:00

Venue: Meeting Room 1012, North Campus

Lecturer Profile

Dr. Bo Yuan is currently an assistant professor in the Department of Electrical and Computer Engineering at Rutgers University. Before that, he was with the City University of New York from 2015 to 2018. Dr. Yuan received his bachelor's and master's degrees from Nanjing University, China, in 2007 and 2010, respectively. He received his PhD degree from the Department of Electrical and Computer Engineering at the University of Minnesota, Twin Cities, in 2015.

His research interests include algorithm and hardware co-design and implementation for machine learning and signal processing systems, error-resilient low-cost computing techniques for embedded and IoT systems, and machine learning for domain-specific applications. He is a recipient of the Global Research Competition Finalist Award from Broadcom Corporation. Dr. Yuan serves as a technical committee track chair and technical committee member for several IEEE/ACM conferences, and he is an associate editor of the Springer Journal of Signal Processing Systems.

Lecture Abstract

In the emerging artificial intelligence era, deep neural networks (DNNs), a.k.a. deep learning, have achieved unprecedented success in various applications. However, DNNs are usually storage-intensive, computation-intensive, and energy-consuming, posing severe challenges to their wide deployment in many application scenarios, especially resource-constrained, low-power IoT applications and embedded systems.

In this talk, I will introduce my recent algorithm/hardware co-design work on energy-efficient DNNs (MICRO'17, MICRO'18, ISCA'19). First, I will show that low-displacement-rank (LDR) matrices enable the construction of low-complexity DNN models as well as the corresponding energy-efficient DNN hardware accelerators. In the second part of the talk, I will show the benefit of using permuted diagonal matrices, another type of structured sparse matrix, for energy-efficient DNN hardware design. Finally, I will introduce the benefits of tensor decomposition for DNN design and the corresponding high-performance DNN accelerator.
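As background for the LDR-matrix part of the talk, the following minimal NumPy sketch (not taken from the speaker's papers; the function name circulant_matvec, the size n = 8, and the random data are illustrative assumptions) shows why structured weight matrices help: a circulant matrix, one classical LDR family, is determined by a single length-n vector, so a layer stores n weights instead of n*n and can be applied in O(n log n) time via the FFT instead of O(n^2).

import numpy as np

def circulant_matvec(c, x):
    # y = C @ x, where C is the circulant matrix whose first column is c.
    # The FFT identity C @ x = ifft(fft(c) * fft(x)) gives O(n log n) work
    # and O(n) weight storage, versus O(n^2) for a dense weight matrix.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)   # n stored weights instead of n * n
x = rng.standard_normal(n)   # layer input

# Explicit dense circulant matrix for comparison: column j is c rolled down by j.
C = np.column_stack([np.roll(c, j) for j in range(n)])

assert np.allclose(C @ x, circulant_matvec(c, x))
print("dense parameters:", n * n, " structured parameters:", n)

The permuted-diagonal and tensor-decomposition approaches mentioned in the abstract rely on a similar reduction in storage and arithmetic, which is what the co-designed hardware accelerators exploit.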

           

