
Pre-trained transformer-based language models for Sundanese

Overview of attention for article published in Journal of Big Data, April 2022

Mentioned by
1 X user

Citations
4 citations (Dimensions)

Readers
51 readers (Mendeley)
Title
Pre-trained transformer-based language models for Sundanese
Published in
Journal of Big Data, April 2022
DOI 10.1186/s40537-022-00590-7
Authors
Wilson Wongso, Henry Lucky, Derwin Suhartono

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 51 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    51       100%

Demographic breakdown

Readers by professional status           Count    As %
Student > Master                         4        8%
Student > Bachelor                       4        8%
Lecturer                                 4        8%
Other                                    2        4%
Librarian                                1        2%
Other                                    5        10%
Unknown                                  31       61%

Readers by discipline                    Count    As %
Computer Science                         16       31%
Unspecified                              1        2%
Agricultural and Biological Sciences     1        2%
Engineering                              1        2%
Design                                   1        2%
Other                                    0        0%
Unknown                                  31       61%

Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 15 February 2023.
All research outputs: #18,855,685 of 23,367,368 outputs
Outputs from Journal of Big Data: #268 of 354 outputs
Outputs of similar age: #318,012 of 442,819 outputs
Outputs of similar age from Journal of Big Data: #9 of 10 outputs

Altmetric has tracked 23,367,368 research outputs across all sources so far. This one is in the 11th percentile – i.e., 11% of other outputs scored the same or lower than it.
So far Altmetric has tracked 354 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 11.3. This one is in the 4th percentile – i.e., 4% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 442,819 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 18th percentile – i.e., 18% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 10 others from the same source and published within six weeks on either side of this one.
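
For readers who want to sanity-check the percentile arithmetic above, here is a minimal Python sketch. The `percentile` helper is hypothetical: it converts a rank into the share of outputs at or below that rank, and it deliberately ignores ties between equally scored outputs, which is why its naive estimates come out higher than the percentiles quoted on this page.

```python
# Minimal sketch of rank-to-percentile arithmetic. The percentile()
# helper is a hypothetical illustration: Altmetric's published
# percentiles also account for ties between equally scored outputs,
# so this naive formula overestimates them.

def percentile(rank: int, total: int) -> float:
    """Percent of outputs ranked at or below `rank` (1 = best)."""
    at_or_below = total - rank + 1  # outputs ranked the same or lower
    return 100.0 * at_or_below / total

# Ranks and pool sizes quoted on this page.
pools = {
    "All research outputs": (18_855_685, 23_367_368),
    "Outputs from Journal of Big Data": (268, 354),
    "Outputs of similar age": (318_012, 442_819),
    "Similar age, Journal of Big Data": (9, 10),
}

for name, (rank, total) in pools.items():
    print(f"{name}: naive estimate ~{percentile(rank, total):.0f}th percentile")
```

Running this gives roughly the 19th, 25th, 28th, and 20th percentiles respectively; the gap between these naive estimates and the quoted 11th, 4th, and 18th percentiles reflects the tie-handling that this sketch omits.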