Abstract of HyTra: Hyperclass Transformer for WiFi Fingerprinting-based Indoor Localization

Muneeb Nasir, Kiara Esguerra, Ibrahima Faye, Tong Boon Tang, Mazlaini Yahya, Afidalina Tumian, Eric Tatt Wei Ho

  • The emerging demand for a variety of novel Location-based Services (LBS) by consumers and industrial users is driven by the rapid and extensive proliferation of mobile smart devices. Sensors embedded in smart devices or machines provide wireless connectivity and Global Positioning System (GPS) capability, and are co-utilized to acquire location-linked data which are algorithmically transformed into reliable and accurate location estimates. GPS is a mature and reliable technology for outdoor localization, but indoor localization in a complex multi-storey building environment remains challenging due to fluctuations in wireless signal strength arising from multipath fading. Location-linked data from wireless access points (WAPs), such as received signal strength (RSS), are acquired as numerical sequences. By conceptualizing a fixed-order sequence of WAP measurements as a sentence in which the RSS from each WAP is a word, we may leverage recent advances in artificial intelligence for natural language processing (NLP) to enhance localization accuracy and improve robustness against signal fluctuations. We propose the hyper-class Transformer (HyTra), an encoder-only Transformer neural network which learns the relative positions of WAPs through multiple learnable embeddings. We propose a second network, HyTra-HF, which improves upon HyTra by applying a hierarchical relationship between location classes. We test our proposed networks on public and private datasets varying in size. HyTra-HF outperforms existing deep learning solutions, obtaining 96.7% accuracy for the floor classification task on the UJIIndoorLoc dataset. HyTra-HF is amenable to deep model compression, achieving 95.95% accuracy with an over ten-fold reduction in model size using Sparsity Aware Orthogonal (SAO) initialization, best-in-class accuracy among sparse models.
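A minimal sketch of the token-per-WAP idea described in the abstract, not the authors' implementation: each scalar RSS reading is projected to one token, and a learnable per-WAP embedding plays the role that positional embeddings play in NLP, letting self-attention learn relations between WAPs before a pooled head predicts the floor. All class and module names, dimensions, layer counts, and the five-floor output are assumptions; the 520-WAP input matches UJIIndoorLoc.

    # Hypothetical sketch, assuming PyTorch; not the paper's released code.
    import torch
    import torch.nn as nn

    class RssEncoderClassifier(nn.Module):
        def __init__(self, num_waps=520, d_model=64, n_heads=4,
                     n_layers=2, num_floors=5):
            super().__init__()
            # Project each scalar RSS value to a d_model-dimensional token.
            self.value_proj = nn.Linear(1, d_model)
            # Learnable per-WAP embedding: lets the network learn the
            # relative positions of WAPs, analogous to NLP positional
            # embeddings over words in a fixed-order sentence.
            self.wap_embed = nn.Embedding(num_waps, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, num_floors)

        def forward(self, rss):                       # rss: (batch, num_waps)
            tok = self.value_proj(rss.unsqueeze(-1))  # one token per WAP
            idx = torch.arange(rss.size(1), device=rss.device)
            tok = tok + self.wap_embed(idx)           # add learned WAP identity
            enc = self.encoder(tok)                   # self-attention over WAPs
            return self.head(enc.mean(dim=1))         # pooled floor logits

    # Usage: a batch of 8 fingerprints over 520 WAPs (UJIIndoorLoc-sized).
    logits = RssEncoderClassifier()(torch.randn(8, 520))
    print(logits.shape)  # torch.Size([8, 5])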

