Enhancing Deep Learning Performance Using Displaced Rectifier Linear Unit
(eBook)

Published
Editora Dialética, 2022.
Status
Available Online

More Details

Format
eBook
Language
English
ISBN
9786525230757

Citations

APA Citation, 7th Edition (style guide)

Macêdo, D. (2022). Enhancing Deep Learning Performance Using Displaced Rectifier Linear Unit. Editora Dialética.

Chicago / Turabian - Author Date Citation, 17th Edition (style guide)

Macêdo, David. 2022. Enhancing Deep Learning Performance Using Displaced Rectifier Linear Unit. Editora Dialética.

Chicago / Turabian - Humanities (Notes and Bibliography) Citation, 17th Edition (style guide)

Macêdo, David. Enhancing Deep Learning Performance Using Displaced Rectifier Linear Unit. Editora Dialética, 2022.

MLA Citation, 9th Edition (style guide)

Macêdo, David. Enhancing Deep Learning Performance Using Displaced Rectifier Linear Unit. Editora Dialética, 2022.

Note! Citations contain only title, author, edition, publisher, and year published. Citations should be used as a guideline and should be double-checked for accuracy. Citation formats are based on standards as of August 2021.

Staff View

Grouping Information

Grouped Work ID: c381cd28-5e07-a2d6-d454-37229d02ee80-eng
Full title: enhancing deep learning performance using displaced rectifier linear unit
Author: macêdo david
Grouping Category: book
Last Update: 2023-09-08 20:56:27PM
Last Indexed: 2024-04-20 02:49:42AM

Book Cover Information

Image Source: hoopla
First Loaded: Nov 11, 2022
Last Used: Dec 10, 2023

Hoopla Extract Information

stdClass Object
(
    [year] => 2022
    [artist] => David Macêdo
    [fiction] => 
    [coverImageUrl] => https://cover.hoopladigital.com/bkw_9786525230757_270.jpeg
    [titleId] => 14984198
    [isbn] => 9786525230757
    [abridged] => 
    [language] => ENGLISH
    [profanity] => 
    [title] => Enhancing Deep Learning Performance Using Displaced Rectifier Linear Unit
    [demo] => 
    [segments] => Array
        (
        )

    [children] => 
    [artists] => Array
        (
            [0] => stdClass Object
                (
                    [name] => David Macêdo
                    [artistFormal] => Macêdo, David
                    [relationship] => AUTHOR
                )

        )

    [genres] => Array
        (
        )

    [price] => 0.9
    [id] => 14984198
    [edited] => 
    [kind] => EBOOK
    [active] => 1
    [upc] => 
    [synopsis] => Recently, deep learning has had a significant impact on computer vision, speech recognition, and natural language understanding. Despite these remarkable advances, recent performance gains in deep learning have been modest and usually rely on increasing the depth of the models, which often requires more computational resources such as processing time and memory usage. To tackle this problem, we turned our attention to the interplay between activation functions and batch normalization, which is currently virtually mandatory. In this work, we propose the Displaced Rectifier Linear Unit (DReLU) activation function, conjecturing that extending the identity function of ReLU into the third quadrant enhances compatibility with batch normalization. Moreover, we used statistical tests to compare the impact of distinct activation functions (ReLU, LReLU, PReLU, ELU, and DReLU) on the learning speed and test accuracy of state-of-the-art VGG and Residual Network models. These convolutional neural networks were trained on CIFAR-10 and CIFAR-100, the most commonly used computer vision datasets in deep learning. The results showed that DReLU sped up learning in all models and datasets. In addition, statistically significant performance assessments (p<0.05) showed that DReLU improved the test accuracy obtained with ReLU in all scenarios. Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments, with one exception.
    [url] => https://www.hoopladigital.com/title/14984198
    [pa] => 
    [publisher] => Editora Dialética
    [purchaseModel] => INSTANT
)
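
The synopsis above describes DReLU as extending ReLU's identity function into the third quadrant. The sketch below illustrates that idea in PyTorch, assuming the form DReLU(x) = max(x, -delta) for a small positive displacement delta; the default value of 0.05 is an illustrative assumption, not a figure taken from the book.

import torch
from torch import nn

class DReLU(nn.Module):
    """Displaced Rectifier Linear Unit (sketch).

    Identity for x >= -delta, constant -delta below, so small negative
    activations pass through instead of being zeroed as in ReLU.
    The default delta here is an illustrative assumption.
    """

    def __init__(self, delta: float = 0.05):
        super().__init__()
        self.delta = delta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamping from below at -delta shifts ReLU's corner into the
        # third quadrant while keeping the identity segment elsewhere.
        return torch.clamp(x, min=-self.delta)

Used this way, DReLU is a drop-in replacement for ReLU after a batch normalization layer, for example: nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), DReLU()).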
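
The synopsis also reports statistically significant comparisons (p<0.05) between activation functions across repeated training runs. A minimal sketch of that kind of paired check, assuming per-run test accuracies and a Wilcoxon signed-rank test as one plausible choice of test (the book documents the actual protocol; the numbers below are placeholders):

from scipy.stats import wilcoxon

# Hypothetical test accuracies from five paired training runs of the
# same model and dataset; placeholders, not results from the book.
acc_relu = [93.1, 93.4, 92.9, 93.2, 93.0]
acc_drelu = [93.6, 93.8, 93.5, 93.9, 93.3]

stat, p_value = wilcoxon(acc_drelu, acc_relu)
print(f"p = {p_value:.4f}; significant at 0.05: {p_value < 0.05}")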