Deep Reinforcement Learning for a Dictionary Based Compression Schema (Student Abstract)

Keren Nivasch, Dana Shapira, Amos Azaria

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Peer-reviewed

Abstract

File compression is an increasingly important process in the age of the internet and massive data. One popular compression scheme, Lempel-Ziv-Welch (LZW), maintains a dictionary of previously seen strings. The dictionary is updated throughout the parsing process by adding newly encountered substrings. Klein, Opalinsky, and Shapira (2019) recently studied the option of selectively updating the LZW dictionary. They show that inserting only a random subset of the strings into the dictionary does not adversely affect the compression ratio. Inspired by their approach, we propose a reinforcement learning based agent, RLZW, that decides when to add a string to the dictionary. The agent is first trained on a large set of data, and then tested on files it has not seen previously (i.e., the test set). We show that on some types of input data, RLZW outperforms the compression ratio of standard LZW.
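To make the selective-update idea concrete, the following is a minimal sketch of LZW compression with a pluggable dictionary-update policy. The `should_add` hook is a hypothetical stand-in: always returning `True` reproduces standard LZW, while a random predicate mimics the selective scheme of Klein, Opalinsky, and Shapira, and a learned policy would play the role of the RLZW agent. This is an illustration of the general technique, not the authors' implementation.

```python
def lzw_compress(data: bytes, should_add=lambda s: True):
    """LZW compression with a pluggable dictionary-update policy.

    `should_add` (hypothetical hook, not part of standard LZW) decides
    whether a newly encountered string is inserted into the dictionary.
    Always returning True gives standard LZW.
    """
    # Initialize the dictionary with all single-byte strings.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""          # current phrase being extended
    output = []      # emitted codes
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc   # keep extending the current phrase
        else:
            output.append(dictionary[w])
            if should_add(wc):          # selective dictionary update
                dictionary[wc] = next_code
                next_code += 1
            w = bytes([byte])
    if w:
        output.append(dictionary[w])
    return output
```

For example, `lzw_compress(b"ababab")` emits `[97, 98, 256, 256]`: after `"ab"` is added as code 256, both later occurrences of `"ab"` compress to a single code. Passing `should_add=lambda s: False` disables dictionary growth entirely, so every byte is emitted as its own code.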

Original language: English
Title of host publication: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Publisher: Association for the Advancement of Artificial Intelligence
Pages: 15857-15858
Number of pages: 2
ISBN (electronic): 9781713835974
Publication status: Published - 2021
Event: 35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online
Duration: 2 Feb 2021 - 9 Feb 2021

Publication series

Name: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Volume: 18

Conference

Conference: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
City: Virtual, Online
Period: 2/02/21 - 9/02/21
