Dissertations

1
Mayki dos Santos Oliveira
A Group-Based Smart Home Recommender System
Advisor: FREDERICO ARAUJO DURAO
COMMITTEE MEMBERS: FREDERICO ARAUJO DURAO, RAFAEL AUGUSTO DE MELO, ROSALVO FERREIRA DE OLIVEIRA NETO
Date: Feb 24, 2025

Abstract:
Smart homes are environments that use IoT-connected devices to collect data and automate tasks, promoting practicality and comfort. Existing proposals seek to analyze residents' behavior to improve the experience and reduce the need for direct interaction with devices. However, scenarios with multiple residents and more complex devices present challenges, such as conflicting preferences arising from diverse needs and behaviors. These disagreements can be frequent in families, where individuals have different levels of authority and preference. A viable solution is group-oriented Recommender Systems (RSs) for smart homes, which model collective preferences, prioritizing group comfort over individual choices. This work proposes a machine learning-based recommendation model that identifies behavior patterns and generates personalized suggestions from historical data. The objective is to minimize conflicts and optimize device usage, promoting a more harmonious and efficient environment. The model was evaluated in three simulated scenarios, achieving an average accuracy of 74% in recommending device actions.
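The group-preference idea described above can be sketched with two classic group-recommender aggregation baselines. Everything below (the `aggregate` helper, action names, and scores) is a hypothetical illustration, not the dissertation's learned model:

```python
def aggregate(prefs, strategy="average"):
    """Pick the device action that best serves the whole group.

    prefs maps a candidate action to one preference score per resident
    (all values here are made up for illustration).
    """
    if strategy == "average":
        score = lambda s: sum(s) / len(s)   # maximize mean satisfaction
    elif strategy == "least_misery":
        score = lambda s: min(s)            # protect the least happy resident
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return max(prefs, key=lambda a: score(prefs[a]))

# Two residents disagree about the thermostat setting:
prefs = {"ac_22C": [0.95, 0.4], "ac_25C": [0.6, 0.7]}
print(aggregate(prefs, "average"))       # ac_22C (mean 0.675 vs 0.65)
print(aggregate(prefs, "least_misery"))  # ac_25C (min 0.6 vs 0.4)
```

The two strategies can disagree, which is exactly the conflict a group model must arbitrate.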

2
Andressa Mirella Filgueiras da Silva
TEACHING PROGRAMMING THROUGH EDUCATIONAL ROBOTICS: THE CONTEXT OF STUDENTS IN THE PROFESSIONAL EDUCATION IN INFORMATICS AT CETEP/LNAB
Advisor: MARLO VIEIRA DOS SANTOS E SOUZA
COMMITTEE MEMBERS: KARINA MOREIRA MENEZES, LUMA DA ROCHA SEIXAS, MARLO VIEIRA DOS SANTOS E SOUZA
Date: Mar 26, 2025

Abstract:
Educational Robotics stands out as a pedagogical approach capable of enhancing learning in computer programming and fostering student engagement in Vocational and Technological Education (EPT, in Portuguese). This dissertation investigates the impact of Educational Robotics in integrated technical secondary education, through participatory research carried out in a Robotics Club at a Territorial Center for Vocational Education. The methodology included an initial assessment of the students' levels of engagement and prior knowledge, followed by weekly practical activities applying programming concepts mediated by robotics. In the final stage, the evolution of the participants' engagement and technical knowledge was evaluated quantitatively and qualitatively. The results demonstrate a significant increase in the students' emotional, cognitive, and behavioral engagement, as well as consistent improvement in their mastery of technical programming skills. It is concluded that Educational Robotics is effective in reducing cognitive barriers and promoting a more interactive and accessible learning environment for students in technical programs integrated with high school.

3
JULIANA CONCEIÇÃO SANTOS
Process Checklist: Transparency-Oriented Checklist for BPMN Process Inspection
Advisor: RITA SUZANA PITANGUEIRA MACIEL
COMMITTEE MEMBERS: HENRIQUE PRADO DE SÁ SOUSA, CLAUDIO NOGUEIRA SANT ANNA, RITA SUZANA PITANGUEIRA MACIEL
Date: Jun 5, 2025

Abstract:
In recent years there has been growing interest in quality assurance for process models in Business Process Model and Notation (BPMN). Quality can be assessed through inspection, a static analysis technique with demonstrated potential for identifying problems in software artifacts. Checklist-based inspection of process models, although still little explored in the process literature, is an important instrument for detecting defects and ensuring artifact quality. The complexity of process models, the scarcity of studies on human-performed inspection, and the pursuit of model quality motivated this research, with a focus on transparency. The transparency of BPMN models improves their description and understanding, benefiting not only internal efficiency, such as management and communication, but also yielding strategic benefits, such as good reputation, credibility, and the sharing of quality information. By systematically evaluating the items of the checklist used to assess model quality in an organization in the justice sector, and incorporating knowledge from the Transparency Catalog, opportunities were identified to evolve the verification practices for BPMN process models. This dissertation proposes the Process Checklist, a transparency-oriented inspection instrument designed to enhance the quality of BPMN process models by identifying quality problems. A literature review was first carried out to identify studies on the inspection of BPMN process models. The checklist, developed for human use, was then evaluated anonymously by process modeling experts.
Based on the results of the first evaluation of the usability, efficiency, and effectiveness of the Process Checklist, carried out by five experts, improvement opportunities were identified and implemented to ensure transparent, objective, reliable, and high-quality process models. In the second evaluation, the Process Checklist proved more effective in ensuring model quality than BPCheck.

4
Cleiton Otavio da Exaltação Rocha
Detection of potentially untrustworthy companies through government procurement extracts: an application with natural language models
Advisor: GECYNALDA SOARES DA SILVA GOMES
COMMITTEE MEMBERS: GECYNALDA SOARES DA SILVA GOMES, MARLO VIEIRA DOS SANTOS E SOUZA, RICARDO FERREIRA DA ROCHA
Date: Jun 5, 2025

Abstract:
In the context of government procurement in Brazil, efficiency and continuous monitoring of spending represent significant challenges for public management. In 2023, the Brazilian government issued 1,761,910 invoices for different types of purchases, amounting to R$ 76.62 billion in transactions with private entities (Transparência, 2024). These supplies are acquired in many locations throughout the country, generating a growing and diverse volume of information found in contracts and in invoices for products and services. However, government purchases are frequently fertile ground for collusion and fraud (OECD, 2007), such as overbilling of product prices, supplier monopolies, and bribery of public officials. The goal of this work is to compare the performance of Natural Language Processing (NLP) models in detecting, from government purchasing extracts, companies that have already been punished by government agencies such as the Office of the Comptroller General (CGU). The data are public and periodically updated through the Federal Government's Open Data portal. The results show that natural language models can serve as an early step in investigating suspicious purchases, providing a classification of potentially problematic purchases.

5
Jéssica de Souza Santana
Competency Specification Process with Representation in the LOMc Metadata Standard
Advisor: LAIS DO NASCIMENTO SALVADOR
COMMITTEE MEMBERS: ANA CONCEIÇÃO ALVES SANTIAGO, LAIS DO NASCIMENTO SALVADOR, VANINHA VIEIRA DOS SANTOS
Date: Jun 16, 2025

Abstract:
The increasing adoption of competency-based curricula, such as the ACM/IEEE Computing Curricula 2020 (CC2020), has highlighted the need for structured competency descriptions in educational resources. This demand grows as pedagogical practices must align with curricular frameworks integrating knowledge, skills, and attitudes. In this context, this dissertation proposes the Competency Specification Process (CSP), a systematic methodological approach for specifying and annotating competencies in computing education tasks. The primary goal is to enhance clarity, reusability, and interoperability of educational data, supporting pedagogical planning and personalized learning. The research follows Design Science Research (DSR), structured in three interdependent cycles: the relevance cycle, identifying educational challenges; the rigor cycle, grounding the process in theory and defining requirements; and the design cycle, developing and evaluating the proposed solution. CSP builds on CC2020 principles and Bloom’s Taxonomy, applied to Problem-Based Learning (PBL) tasks previously used in Theory of Computation courses. For semantic representation of competencies, an extension to the Learning Object Metadata (LOM) standard—LOMc (LOM-Competence)—was proposed, formalized in RDF to ensure interoperability and reuse in digital environments. Evaluation involved interviews with Theory of Computation instructors, who assessed annotated tasks based on clarity, relevance, and reusability. Results indicate that CSP enables precise, contextualized competency formulation, improves alignment between tasks and learning objectives, and enhances educational resource curation and recommendation. Challenges include the need for manual annotation review, suggesting future integration with educational ontologies to automate and scale the process.

6
Vítor Alves Barbosa
Exact and Heuristic Approaches for the Pickup and Delivery Problem with Time Windows and Scheduling on the Edges, and for the Single-Machine Coupled Task Scheduling Problem with Exact Delays
Advisor: RAFAEL AUGUSTO DE MELO
COMMITTEE MEMBERS: RAFAEL AUGUSTO DE MELO, THIAGO FERREIRA DE NORONHA, CELSO DA CRUZ CARNEIRO RIBEIRO
Date: Jul 21, 2025

Abstract:
In this work, two optimization problems related to routing and scheduling are studied. The first addresses the Pickup and Delivery Problem with Time Windows and Scheduling on the Edges (PDPTW-SE). The challenge lies in determining routes for a heterogeneous fleet of vehicles to transport requests with specific pickup and delivery locations, considering that some traversals require synchronized operations with machines. Since the number of machines is limited, their usage must be properly scheduled. The objective is to minimize the total completion time while respecting capacity, time window, and precedence constraints. To this end, a mixed-integer programming (MIP) formulation and a multistart heuristic with a linear programming-based improvement procedure are developed. A benchmark set of instances is also proposed, consisting of two families representing different applications of the problem. In the computational experiments, the MIP formulation solves instances with up to twelve requests and finds feasible solutions for 93.40% of the cases. The heuristic, in turn, obtains feasible solutions for all instances, with quality equivalent to or better than that of the MIP formulation. The second study addresses the Single-Machine Coupled Task Scheduling Problem with Exact Delays (SMCTSP), a job scheduling problem in which each job consists of two coupled, non-preemptive tasks separated by an exact delay. The objective is to minimize the completion time of the last scheduled task. The problem was modeled using constraint programming (CP), and a biased random-key genetic algorithm (BRKGA) was developed, incorporating a warm-start solution generator, periodic restarts with varying intensities, and a local search algorithm. Computational experiments showed that the proposed BRKGA provides high-quality solutions in reduced computational times compared to the CP model. On the other hand, the CP model significantly outperformed the BRKGA when run for one hour with multiple threads. Finally, the proposed approaches combined obtained new best solutions for 93.33% of the instances that had not yet been solved to optimality by previous approaches in the literature.
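The core ingredient of a BRKGA, decoding a vector of random keys into a feasible schedule, can be illustrated for a coupled-task instance. The greedy integer-time decoder below is a simplified hypothetical sketch under assumed integer durations, not the dissertation's decoder:

```python
def decode(keys, jobs):
    """Hypothetical BRKGA decoder: sorting the random keys induces a job
    order, then each coupled task pair is placed at the earliest feasible
    integer start time on the single machine.

    Each job is (task1 length, exact delay, task2 length)."""
    order = sorted(range(len(jobs)), key=lambda j: keys[j])
    busy = []  # occupied [start, end) intervals on the machine

    def free(s, e):
        return all(e <= bs or s >= be for bs, be in busy)

    makespan = 0
    for j in order:
        a, delay, b = jobs[j]
        s = 0
        while not (free(s, s + a) and free(s + a + delay, s + a + delay + b)):
            s += 1
        busy += [(s, s + a), (s + a + delay, s + a + delay + b)]
        makespan = max(makespan, s + a + delay + b)
    return makespan

# toy instance: the second job's tasks fit inside the first job's delay gap
jobs = [(2, 3, 2), (1, 1, 1)]
print(decode([0.1, 0.9], jobs))  # 7
```

In a full BRKGA the evolutionary operators only manipulate the key vectors; all problem knowledge lives in a decoder of this shape.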

7
Victor Soares Cardel
A Systematic Review and Q-Learning-based Design of Scheduling Functions for 6TiSCH Networks.
Advisor: BRUNO PEREIRA DOS SANTOS
COMMITTEE MEMBERS: PAULO HENRIQUE LOPES RETTORE, ALLAN EDGARD SILVA FREITAS, TATIANE NOGUEIRA RIOS
Date: Jul 24, 2025

Abstract:
An IPv6 over the TSCH mode of IEEE 802.15.4e (6TiSCH) network provides IPv6 connectivity through IEEE 802.15.4 links governed by Time Slotted Channel Hopping (TSCH). TSCH is a medium access control scheme for low-power and lossy networks, providing low energy consumption, high reliability, and deterministic latency through time-division multiplexing. To achieve this, 6TiSCH defines a component responsible for determining the best communication schedule among devices, called a Scheduling Function (SF). Because the design and implementation of SFs is context-dependent, it remains an active topic of study, and many different scheduling functions have been proposed, each with its particular trade-offs. Additionally, Artificial Intelligence (AI), and in particular machine learning, emerges as a prominent tool for its capacity to promote adaptability and flexibility. Although previous works have compared different scheduling strategies, the systematization of AI algorithms for 6TiSCH has not been explored in detail. This work provides such a review, presenting an analysis of the current state of AI-based scheduling methods. It also advances the state of the art by presenting, evaluating, and comparing two new Q-learning-based SFs against state-of-the-art SFs for 6TiSCH. The experimental results show the promising potential of the proposed approaches.
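The Q-learning machinery underlying such an SF can be shown with a minimal tabular sketch. The state space (discretized queue occupancy), action set, and reward below are assumptions for illustration, not the dissertation's actual scheduling functions:

```python
import random

# Hypothetical tabular Q-learning scheduling-function sketch: the agent
# observes a discretized queue-occupancy level and decides whether to
# add, remove, or keep TSCH cells.
ACTIONS = ("add_cell", "remove_cell", "keep")
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy: explore occasionally, otherwise act greedily."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning bootstrap update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# one illustrative step: adding a cell under congestion was rewarded
update(state=2, action="add_cell", reward=1.0, next_state=1)
print(Q[(2, "add_cell")])  # 0.1
```

In a deployed SF the reward would be derived from observed network metrics (e.g. delivery ratio or latency), and the learned policy replaces a hand-tuned cell-allocation rule.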

8
Ricardo Gomes de Oliveira
EVALUATING GRAMMAR PATTERNS ON TRANSFORMERS FOR THE PORTUGUESE LANGUAGE: A CASE STUDY ON ATTENTION HEADS.
Advisor: DANIELA BARREIRO CLARO
COMMITTEE MEMBERS: Aline Marins Paes Carvalho, DANIELA BARREIRO CLARO, MARLO VIEIRA DOS SANTOS E SOUZA
Date: Jul 30, 2025

Abstract:
The advancement of natural language models has been marked by the transition from rule-based approaches and statistical methods to deep neural architectures, such as the Transformer, which enable the modeling of contextual dependencies in texts in a distributed manner. This study investigates the capacity of a monolingual BERT-based model, trained on Brazilian Portuguese data, to represent syntactic governor→dependent relations, as described within the Universal Dependencies (UD) framework. To conduct the analysis, we used the annotated UD Portuguese-Bosque corpus, from which we extracted sentences containing diverse grammatical patterns, including verbal transitivity, passive voice, reflexive pronouns, subject predicatives, and subordinate clauses. The sentences were processed by the model, and attention values were extracted per layer and head, aiming to identify alignments between attention weights and the syntactic dependencies recorded in the corpus. The model's tokenizer was used alongside lexical tracing mechanisms that associate subtokens with their respective positions in the original texts, enabling an interpretable analysis of syntactic pairs. The evaluation relied on metrics such as grammatical pattern-wise accuracy, attention distribution entropy, and the Undirected Unlabeled Attachment Score (UUAS). A composite metric combining selectivity and structural adherence was also applied. The results reveal that certain attention heads exhibit systematic activation patterns for specific dependencies. Notably, head 3 in layer 2 consistently aligned with relations between verbal nuclei and their arguments, an example of emerging functional specialization. These findings contribute to the understanding of the internal attention mechanisms of Transformer-based models applied to Brazilian Portuguese and provide insights for future approaches in supervised compression and automated linguistic analysis.
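The head-by-head alignment test can be sketched as follows. The toy attention matrix, the `head_alignment` helper, and the scoring rule (each token's strongest attention partner checked against the gold undirected edge set, in the spirit of UUAS) are illustrative assumptions, not the dissertation's exact pipeline:

```python
def head_alignment(attn, gold_edges):
    """Fraction of tokens whose strongest attention partner is a gold
    UD neighbor (undirected, unlabeled), in the spirit of UUAS."""
    n = len(attn)
    gold = {frozenset(e) for e in gold_edges}
    hits = 0
    for i in range(n):
        # strongest attention target, ignoring self-attention
        partner = max((c for c in range(n) if c != i), key=lambda c: attn[i][c])
        hits += frozenset((i, partner)) in gold
    return hits / n

# toy 3-token sentence with gold dependency edges (0-1) and (1-2)
attn = [
    [0.1, 0.8, 0.1],
    [0.7, 0.1, 0.2],
    [0.2, 0.7, 0.1],
]
print(head_alignment(attn, [(0, 1), (1, 2)]))  # 1.0
```

Running this per layer and head over a treebank yields the kind of head-specialization map the abstract describes.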

9
LUCAS MASCARENHAS ALMEIDA
Tetris: An SLA-aware Application Placement Strategy in the Edge-Cloud Continuum
Advisor: MAYCON LEONE MACIEL PEIXOTO
COMMITTEE MEMBERS: CARLOS HENRIQUE GOMES FERREIRA, GERALDO PEREIRA ROCHA FILHO, MAYCON LEONE MACIEL PEIXOTO
Date: Sep 9, 2025

Abstract:
The edge-cloud continuum integrates edge and cloud resources to deliver a flexible and scalable infrastructure. While this paradigm reduces latency and enhances scalability, the heterogeneous nature of these environments introduces challenges such as resource fragmentation and inefficient placement. Many existing approaches assume high resource availability and homogeneous infrastructures, overlooking fragmentation, application drops, and the interplay between performance metrics. This work introduces Tetris, an application placement strategy inspired by the iconic puzzle game, designed to operate in heterogeneous and complex edge-cloud continuum environments. It prioritizes tasks based on urgency, workload diversity, and resource availability. Compared to state-of-the-art approaches, Tetris reduces latency SLA violations by approximately 75%, while maintaining zero drop occurrences. Unlike proximity-based methods that focus on placing tasks near users, Tetris's key strength lies in avoiding resource fragmentation, simplifying the placement problem, and improving overall system resilience. Additionally, clustering analysis reveals strong correlations between fragmentation and service degradation, reinforcing the importance of balanced resource allocation. These findings contribute to the development of more resilient, efficient, and explainable edge-cloud systems, with improved QoS and QoE for end-users.
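The urgency- and fragmentation-aware placement idea can be sketched with a classic deadline-ordered best-fit heuristic. Node names, task fields, and the `place` helper are hypothetical illustrations, not Tetris's actual algorithm:

```python
def place(tasks, nodes):
    """tasks: (name, capacity demand, deadline); nodes: {name: free capacity}.
    Most urgent tasks are placed first; each goes to the feasible node
    that leaves the least leftover capacity (best fit), which limits
    resource fragmentation."""
    placement = {}
    for name, demand, _deadline in sorted(tasks, key=lambda t: t[2]):
        candidates = [n for n, free in nodes.items() if free >= demand]
        if not candidates:
            placement[name] = None      # task dropped: no node can host it
            continue
        best = min(candidates, key=lambda n: nodes[n] - demand)
        nodes[best] -= demand
        placement[name] = best
    return placement

nodes = {"edge-1": 4, "edge-2": 8}
tasks = [("alert", 2, 1), ("cam-feed", 8, 10)]
print(place(tasks, nodes))  # both placed; a worst-fit choice for 'alert'
                            # would have left no node able to host 'cam-feed'
```

The example shows why fragmentation matters: squeezing the small urgent task into the small node preserves the large contiguous block the big task needs.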

10
MARCELO PEREIRA BARBOSA
Trust Engineering: Software Requirements for Strengthening Connections among Students in Virtual Learning Environments
Advisor: RITA SUZANA PITANGUEIRA MACIEL
COMMITTEE MEMBERS: SEAN SIQUEIRA, LAIS DO NASCIMENTO SALVADOR, RITA SUZANA PITANGUEIRA MACIEL
Date: Sep 15, 2025

Abstract:
Interpersonal trust among students is essential for the success of collaborative activities in Virtual Learning Environments (VLEs), but establishing it is challenging due to the absence of physical presence and the limitations of computer-mediated communication. Lack of trust can compromise knowledge sharing, motivation, engagement, and student retention. To support the development of interpersonal trust among students in VLEs, this research aimed to identify software requirements for features that promote trust. To this end, a Systematic Mapping Study (SMS) was conducted to understand how trust has been conceptualized in the literature, as well as the intrinsic aspects of this phenomenon. Next, attributes and characteristics that influence trust among students were analyzed across four evolutionary phases: acquiring, maintaining, losing, and restoring trust. As a result, 37 attributes were identified, most of which originated from other domains but can influence the dynamics of trust among students in VLEs. However, the evidence analyzed focused mainly on the trust acquisition phase, while the loss phase was little explored. The SMS revealed significant gaps in the field of trust among students, highlighting, in particular, the absence of characteristics associated with maintaining and restoring trust. It thus became clear that new characteristics should be explored based on the students' own perceptions of the acquisition and loss phases, and that the factors influencing the maintenance and restoration of trust should be understood from their perspective. Based on the gaps identified, an exploratory study was conducted through an online survey with 170 students, aiming primarily to understand which personal and behavioral characteristics can influence trust in the four evolutionary phases.
A total of 248 characteristics were identified that can influence trust: 96 in the acquisition phase, 78 in the loss phase, and 37 in each of the maintenance and restoration phases. Based on these results, 26 software requirements were identified for the following features: student trust profile, collaborative work groups, trusted peer recommendation, peer evaluation, and friend network. In addition, a platform-independent conceptual model was developed that illustrates how the features can be integrated during the development of VLEs that aim to promote trust. Finally, the software requirements were validated with stakeholders using a high-fidelity mockup that illustrated them within the intended features. The main artifacts generated by this research were: a conceptual map based on definitions from the literature, a conceptual model with attributes per evolutionary phase, a catalog of characteristics identified in the literature and by students, software requirements, and a conceptual model with features and requirements aimed at promoting trust.

11
FABIO SANTOS DOS SANTOS
Exploring Probabilistic Data Structures for Multi-Path Routing Optimization in Named Data Networks
Advisor: LEOBINO NASCIMENTO SAMPAIO
COMMITTEE MEMBERS: Antonio Augusto de Aragão Rocha, BRUNO PEREIRA DOS SANTOS, LEOBINO NASCIMENTO SAMPAIO
Date: Sep 15, 2025

Abstract:
Routing protocols are essential for the accurate discovery of reachability information in Named Data Networks (NDN). However, selecting the most suitable routing protocol must take into account the network topology, as different protocols exhibit distinct performance characteristics depending on the topology. In disruptive scenarios, networks with multiple paths that rely on Link-State-based protocols, such as the Named Data Link State Routing Protocol (NLSR), face serious limitations. These limitations stem from the need to synchronize state information across all nodes and compute all routes at each node to maintain consistent topological information. On the other hand, Distance Vector-based protocols offer a simpler synchronization process due to their distributed and asynchronous nature. Nevertheless, their simplified discovery mechanism struggles to efficiently handle the combination of multipath routing and ring-shaped topologies, which leads to inconsistent routes, resulting in unsatisfied interests and reduced throughput. In this work, we propose enhancing the simplified discovery mechanism of distance vector protocols by incorporating probabilistic data structures. Our approach uses these structures to create a probabilistic path vector that enables the detection of inconsistent routes, allowing their elimination and the selection of optimal paths. As a result, we develop a distributed and asynchronous protocol that maintains control over the effectiveness of multipath routing. We evaluate our proposal by comparing the new protocol with other distance vector and link-state protocols across various topologies with different numbers of nodes, including emulation in real-world topologies under multiple failure scenarios. The emulation results demonstrate that the proposed solution achieves a higher NDN packet delivery rate and a significant reduction in unsatisfied interests, proving to be a more effective approach.
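A minimal sketch of the probabilistic ingredient, assuming a Bloom filter over traversed node IDs (the class name, sizes, and hash scheme below are hypothetical, not the protocol's actual structure):

```python
import hashlib

class PathFilter:
    """Hypothetical probabilistic path vector: a tiny Bloom filter that
    accumulates the node IDs a route advertisement has traversed, so a
    node can (probabilistically) detect that it already appears on the
    path and discard the looping, inconsistent route."""

    def __init__(self, bits=64, k=3, value=0):
        self.bits, self.k, self.value = bits, k, value

    def _positions(self, node_id):
        # k independent bit positions derived from salted SHA-256 hashes
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{node_id}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def add(self, node_id):
        for p in self._positions(node_id):
            self.value |= 1 << p

    def may_contain(self, node_id):
        # no false negatives; rare false positives are possible
        return all(self.value >> p & 1 for p in self._positions(node_id))

pf = PathFilter()
for hop in ["A", "B", "C"]:
    pf.add(hop)
print(pf.may_contain("A"))  # True: node A would drop this advertisement
```

Because the filter is a fixed-size integer, it travels cheaply inside route advertisements, which is what makes the path vector practical for a distributed, asynchronous protocol.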

12
TANIA MARIA FEITOSA
HYBRIDIZATION IN SOFTWARE DEVELOPMENT WITH SCRUM: A QUALITATIVE STUDY WITH RECOMMENDATIONS
Advisor: RITA SUZANA PITANGUEIRA MACIEL
COMMITTEE MEMBERS: DAVI VIANA DOS SANTOS, IVAN DO CARMO MACHADO, RITA SUZANA PITANGUEIRA MACIEL
Date: Sep 18, 2025

Abstract:
The software industry faces constant challenges in adapting to rapid market changes, meeting the growing demands of customers, and maintaining the productivity of development teams. In this context, there is increasing interest in hybrid development approaches that combine agile methods with plan-driven approaches. Due to its iterative, incremental, and adaptable nature, Scrum stands out as a frequent foundation for building hybrid processes. However, difficulties persist in formulating and effectively adopting these approaches, which can compromise productivity, product quality, and stakeholder satisfaction.
This dissertation aimed to investigate, characterize, and analyze the use of hybrid processes in the software industry, particularly those involving Scrum, from the perspective of professionals working in development. An empirical study was conducted with 158 Information Technology professionals worldwide, including 143 survey participants and 15 respondents in semi-structured interviews. Data analysis enabled the identification of practices, methodological combinations, motivations for adoption, challenges faced, and strategies used to overcome them.
The results indicate that hybridization is largely driven by the pursuit of flexibility, shorter delivery times, higher quality, risk mitigation, and greater project control. The most recurrent combinations were ScrumBan and Water-Scrum-Fall. Reported challenges included defining which practices to combine, adapting teams, resistance to change, and the lack of consolidated guidelines for adoption. Professionals' perceptions reveal that the practical application of hybridization does not always align with academic definitions, highlighting a gap between theory and practice.
Based on these findings, 34 recommendations were formulated and organized into human, technical, and organizational categories, aiming to support the adjustment and improvement of software development processes. These recommendations are expected to contribute to building more effective processes tailored to the needs of each project, helping to overcome common obstacles to the adoption of hybrid methods.
By sharing these findings, this dissertation seeks to expand knowledge about software process hybridization, promote a clearer understanding of its benefits and challenges, and encourage more efficient practices in the industry, with positive impacts on project productivity and quality.

13
SANDRO DE CARVALHO FRANCO
Ontovid II: A Semantic Knowledge Graph-based Solution for Public Health Data Integration
Advisor: LAIS DO NASCIMENTO SALVADOR
COMMITTEE MEMBERS: RENATA WASSERMANN, LAIS DO NASCIMENTO SALVADOR, ROBESPIERRE DANTAS DA ROCHA PITA
Date: Nov 5, 2025

Abstract:
The complexity of the information systems within Brazil's Unified Health System (SUS), characterized by diverse and disconnected databases, poses a major challenge for public health managers. There is a clear need for solutions that provide integrated access to distinct data sources through specialized queries. This dissertation presents the development of Ontovid II, a solution based on Semantic Knowledge Graphs (SKG) for the integration of public health data. Ontovid II employs ontologies organized into three layers—source, integration, and domain—to enable inferences and unified queries over data from the Live Birth Information System (SINASC), the Mortality Information System (SIM), and the Ministry of Health's Notification System (e-SUS Notifica), including notifications related to COVID-19. One of the solution's objectives is to support health managers in analyzing essential indicators such as mortality, immunization, hospitalizations, and notifications. The approach was validated by managers from the Municipal Health Department of Camaçari, Bahia (Brazil), demonstrating its effectiveness in extracting relevant information to support decision-making.

14
DHYEGO TAVARES MOREIRA DA CRUZ
Effects of Music on Brain Activity and Performance of Software Testing Professionals: An Experimental Study with EEG
Advisor: EDUARDO SANTANA DE ALMEIDA
COMMITTEE MEMBERS: EDUARDO SANTANA DE ALMEIDA, FERNANDA MADEIRAL DELFIM, PIERRE YVES FRANCOIS MARIE JOSEPH SCHOBBENS
Date: Nov 7, 2025

Abstract:
The practice of listening to music is widely adopted by software engineering professionals to improve concentration and attenuate environmental noise. However, the neurophysiological and behavioral impact of this practice on software testers, a group with distinct cognitive demands, remains largely unexplored. This dissertation investigates the influence of music on the performance (accuracy) and neurophysiological activity of professional software testers during the execution of testing tasks. We conducted a controlled experiment with 14 professionals, divided into an experimental group (exposed to Lo-Fi music) and a control group (in silence). The participants performed four blocks of distinct tasks: test code comprehension, syntax error identification, logic error identification, and test case creation. Brain activity was analyzed using Electroencephalography (EEG), focusing on metrics such as Power Spectral Density (PSD) and Event-Related Desynchronization/Synchronization (ERD/ERS). The results indicate that the music-exposed group achieved higher accuracy in the analytical tasks of comprehension, syntax detection, and logic detection. However, the experimental group exhibited slightly lower performance on the generative task of test case creation. The music group also exhibited more stable EEG activity, notably in the Beta and Delta bands, associated with attention and internal cognitive processing. Furthermore, they exhibited a more pronounced Alpha desynchronization (ERD) during the analytical tasks, suggesting greater concentration. Conversely, in the generative task, the music group demonstrated Alpha synchronization (ERS), indicating a more relaxed state and lower focus. This study, pioneering the application of EEG to professional software testers, provides objective evidence that the impact of music on performance is task-dependent. Instrumental music seems to act as a cognitive enhancer for analytical tasks but may not be beneficial for generative tasks. The contributions include a new methodological foundation for the field and a public dataset for replication and future investigations.
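The ERD/ERS measure can be computed as a relative band-power change between a baseline window and a task window. Sign conventions vary in the literature; in this hypothetical sketch (with made-up power values), positive means a power decrease (desynchronization, ERD) and negative means an increase (synchronization, ERS):

```python
def erd_ers(baseline_power, task_power):
    """Relative band-power change (%) between baseline and task windows.
    Here positive = desynchronization (ERD), negative = synchronization
    (ERS); the opposite sign convention is also common."""
    return (baseline_power - task_power) / baseline_power * 100.0

# hypothetical alpha-band powers (arbitrary units)
print(erd_ers(10.0, 6.0))   # 40.0: alpha ERD, consistent with focused processing
print(erd_ers(10.0, 13.0))  # -30.0: alpha ERS, consistent with a relaxed state
```

The band powers themselves would come from a PSD estimate of the EEG signal integrated over the band of interest (e.g. Alpha, 8-13 Hz).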

15
MARCOS VINÍCIUS QUEIROZ DE SANT'ANA FILHO
Paperman – A Scientific Article Recommendation System
Advisor: FREDERICO ARAUJO DURAO
COMMITTEE MEMBERS: DANILO BARBOSA COIMBRA, FREDERICO ARAUJO DURAO, RENATO LIMA NOVAIS
Date: Nov 27, 2025

Abstract:
The search for references and related work in scientific research can be exhausting, consuming an average of 4 hours per week for researchers. The abundance of sources and repositories makes it even more challenging to validate the veracity and reliability of these works, resulting in the disposal of half of the collected samples and negatively impacting productivity. Considering this scenario, this study aims to develop a platform that facilitates the initial stages of research through recommendation systems, models based on the researcher's profile, and data post-processing. The proposed system, Paperman, employs natural language processing and machine learning techniques to analyze researchers' publication histories and generate personalized recommendations for scientific articles. The system architecture includes an API for data collection and processing, as well as integrations with external services such as ORCID and DBLP, and a browser extension that presents recommendations in an intuitive manner. Experimental results demonstrate the system's effectiveness, with metrics such as an MRR of 0.8 and an nDCG@5 of 0.9407, indicating the high relevance of the generated recommendations. The study contributes to the field of educational recommendation systems, offering a practical solution to optimize the literature review process and discovery of related works in scientific research. The Paperman system addresses common challenges in academic research, such as information overload and the need for efficient discovery of relevant publications, by leveraging the researcher's profile and history to provide tailored recommendations.
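The reported metrics follow their standard definitions, which can be computed as below; the ranked relevance lists are hypothetical examples, not Paperman's evaluation data:

```python
import math

def mrr(ranked_lists):
    """Mean Reciprocal Rank: average of 1/rank of the first relevant
    item in each ranking (rankings are 0/1 relevance lists)."""
    total = 0.0
    for ranking in ranked_lists:
        rank = next((i + 1 for i, rel in enumerate(ranking) if rel), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(ranked_lists)

def ndcg_at_k(relevances, k=5):
    """nDCG@k: DCG of the ranking divided by DCG of the ideal ranking."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg else 0.0

print(mrr([[0, 1, 0], [1, 0, 0]]))  # (1/2 + 1/1) / 2 = 0.75
print(round(ndcg_at_k([1, 0, 1, 1, 0]), 3))  # 0.906
```

Both metrics reward placing relevant articles near the top of the list, which is why they are natural choices for evaluating a recommendation feed.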
|
|
|
16
|
-
SILVIO JOSÉ DE QUEIROZ PEREIRA
-
ChainID: A Blockchain-Based Platform for Decentralized Identity Management
-
Advisor : LEOBINO NASCIMENTO SAMPAIO
-
COMMITTEE MEMBERS :
-
GLAUBER DIAS GONCALVES
-
ALLAN EDGARD SILVA FREITAS
-
LEOBINO NASCIMENTO SAMPAIO
-
Data: Dec 10, 2025
-
-
Show Abstract
-
Identity is essential for recognizing entities (individuals, things, and organizations) and their various relationships within their contextual environment. Identity Management (IdM) involves processes such as authentication, authorization, accountability, and auditing, traditionally carried out by centralized or federated systems. However, these models limit privacy, interoperability, and user control over personal data. In this context, Decentralized Digital Identities (DDIs) emerge, in which individuals themselves hold and manage their own identity—now unique, portable, and securely shareable. This work presents ChainID, a platform for decentralized identity management based on blockchain, developed within the RNP (Brazilian National Education and Research Network) Working Group GT-ChainID. The platform adopts a service-oriented approach to enable the creation, issuance, verification, and revocation of decentralized identifiers (DIDs) and verifiable credentials (VCs), abstracting the complexity of the standards, protocols, and distributed infrastructure involved in this new digital identity paradigm. ChainID was initially built on the Hyperledger Indy blockchain, which is oriented toward privacy and credential management. As the solution’s requirements and architecture evolved, the platform migrated to the Hyperledger Besu blockchain, compatible with the Ethereum Virtual Machine (EVM). This transition allowed compatibility with the RNP Testbed experimentation environment, improved operational capabilities, and opened possibilities for future integration with the Brazil Blockchain Network (RBB). The solution provides a RESTful API, support for asynchronous events, integration with protocols such as SAML2 and CAS authentication, as well as dedicated interfaces, including the ChainID Console for configuration and administrative management, and the ChainID Wallet for individual control of user DIDs and VCs. 
As proofs of concept, two use cases were implemented: (1) the ChainID federated authentication component, demonstrating its applicability in educational scenarios using Moodle and the feasibility of replacing centralized infrastructures with DDI-based solutions without compromising security, privacy, or interoperability; and (2) the CarbonID project, which explores the issuance, validation, and traceability of sustainability certificates using DIDs and smart contracts, showcasing the platform’s potential for adoption in other applications and domains, including environmental, social, and corporate contexts.
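The issue/verify/revoke lifecycle that ChainID abstracts can be sketched minimally as follows. This is a deliberately simplified stand-in: real DID/VC stacks like the one described use asymmetric keys and ledger-anchored revocation registries, whereas here an HMAC and an in-memory set play those roles, purely for illustration.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"   # stand-in for the issuer's private key
REVOKED = set()                 # stand-in for an on-chain revocation registry

def issue_vc(subject_did, claims):
    """Issue a credential as a canonical payload plus a detached proof."""
    payload = json.dumps({"sub": subject_did, "claims": claims}, sort_keys=True)
    proof = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "proof": proof}

def verify_vc(vc):
    """A credential verifies if its proof matches and it has not been revoked."""
    expected = hmac.new(ISSUER_KEY, vc["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, vc["proof"]) and vc["proof"] not in REVOKED

def revoke_vc(vc):
    REVOKED.add(vc["proof"])

vc = issue_vc("did:example:123", {"role": "student"})
print(verify_vc(vc))   # True
revoke_vc(vc)
print(verify_vc(vc))   # False
```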
|
|
|
17
|
-
EDLANE CRISTINE DOS SANTOS PROENCIA
-
Organizational Interoperability in the Context of Information Systems: A Collaborative Approach
-
Advisor : RITA SUZANA PITANGUEIRA MACIEL
-
COMMITTEE MEMBERS :
-
CELIA GHEDINI RALHA
-
RITA SUZANA PITANGUEIRA MACIEL
-
VALDEMAR GRACIANO NETO
-
Data: Dec 15, 2025
-
-
Show Abstract
-
The interaction among independent, heterogeneous, and dynamic systems in Systems of Information Systems (SoIS) imposes significant challenges, especially at the level of Organizational Interoperability. To achieve this interoperability, aligning business processes across organizations is fundamental. Business Process Management (BPM) approaches are essential tools for this alignment, but existing solutions fail in collaborative contexts where participants can join and leave at any time, without central management. Given this gap, the main objective of this work was to develop a BPM-based solution to support Organizational Interoperability in collaborative SoIS. The main contribution was the specification of a metamodel that provides a management structure. This metamodel enables modeling collaborative business processes and addresses the entry and exit dynamics of participants by formalizing the explicit separation between Role (the abstract function responsible for the process) and Participant (the concrete and dynamic organizational instance that executes it). Additionally, the solution allows the definition of the Interoperability Links (technical and sociotechnical) required for interactions and communications between roles, ensuring that collaboration requirements are explicit. The solution’s applicability was evaluated through its implementation in a semi-automated simulation environment instantiated in three domains (e-commerce, health plan, and public security). The results validated the approach, demonstrating adaptability upon the entry of new participants and resilience upon exit, through Role-based rerouting. It is concluded that the solution offers a viable model to manage participant dynamics, providing the transparency and resilience mechanisms necessary for Organizational Interoperability in collaborative SoIS.
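The Role/Participant separation and the Role-based rerouting described above can be sketched as follows. This is a minimal illustration under assumed names (`Role`, `join`, `route`), not the metamodel's actual notation.

```python
class Role:
    """Abstract function in the collaborative process (e.g. 'payment provider')."""
    def __init__(self, name):
        self.name = name
        self.participants = []   # concrete organizations currently bound to the role

    def join(self, participant):
        self.participants.append(participant)

    def leave(self, participant):
        self.participants.remove(participant)

    def route(self, message):
        # Role-based rerouting: any bound participant may serve the request,
        # so the exit of one organization does not break the process.
        if not self.participants:
            raise RuntimeError(f"no participant bound to role {self.name!r}")
        return f"{self.participants[0]} handles {message!r}"

payments = Role("payment provider")
payments.join("OrgA")
payments.join("OrgB")
print(payments.route("charge order"))   # OrgA handles the request
payments.leave("OrgA")                  # OrgA exits the SoIS
print(payments.route("charge order"))   # rerouted to OrgB
```

The key design point is that the process references the Role, never a Participant directly, which is what makes entry and exit transparent to the running collaboration.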
|
|
|
18
|
-
JUVENAL BRUNO ANDRADE DA SILVA
-
MoTSPPP: Multi-objective Traveling Salesman Problem with Profits and Passengers
-
Advisor : ISLAME FELIPE DA COSTA FERNANDES
-
COMMITTEE MEMBERS :
-
GUSTAVO DE ARAUJO SABRY
-
ISLAME FELIPE DA COSTA FERNANDES
-
RAFAEL AUGUSTO DE MELO
-
Data: Dec 17, 2025
-
-
Show Abstract
-
Ridesharing systems have emerged as a potential solution to urban mobility challenges, promoting collaborative vehicle usage and route optimization. These systems require efficient routing algorithms to balance conflicting objective functions such as travel cost, travel time, and driver bonuses. Previous studies have modeled such systems using the Traveling Salesman Problem with Profits (TSPP), where the driver shares the vehicle with passengers and minimizes travel cost. The Bi-objective Traveling Salesman Problem (BiTSP) has also been investigated in prior work, but it ignores passengers and bonus collection. Consequently, the literature lacks a multi-objective formulation that captures the real-world trade-offs among such objective functions. This work introduces the Multi-objective Traveling Salesman Problem with Profits and Passengers (MoTSPPP), an NP-hard optimization problem that minimizes travel cost and time while maximizing bonus collection. A mathematical formulation and a proof of NP-hardness are provided. Eight algorithms are investigated: an exact solver, three naïve heuristics, and four evolutionary metaheuristics (NSGA-II, MOEA/D, IBEA, and SPEA2). A comprehensive experimental study is conducted on 252 benchmark instances, comprising symmetric and asymmetric graphs with varying edge-weight correlations. Performance is evaluated, with the support of statistical tests, in terms of processing time, solution quality, and solution diversity. Results demonstrate that the MoTSPPP is computationally more challenging than the TSPP and BiTSP, and that metaheuristic approaches yield significantly better results than naïve heuristics.
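Since the MoTSPPP compares solutions on three conflicting objectives, Pareto dominance is the natural comparison relation underlying the evolutionary metaheuristics named above. A minimal sketch, assuming solutions are (cost, time, bonus) tuples with cost and time minimized and bonus maximized:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b.

    Each solution is (cost, time, bonus): cost and time are minimized,
    bonus is maximized, matching the three MoTSPPP objectives.
    """
    no_worse = a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
    better = a[0] < b[0] or a[1] < b[1] or a[2] > b[2]
    return no_worse and better

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Toy tours: the third is dominated by the first (worse cost, time, and bonus).
tours = [(100, 50, 30), (90, 60, 30), (120, 70, 10)]
print(pareto_front(tours))   # [(100, 50, 30), (90, 60, 30)]
```

The exact solver and the metaheuristics differ in how they search, but all are judged on how well they approximate this non-dominated front.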
|
|
|
19
|
-
MATHEUS AUGUSTO OLIVEIRA DOS SANTOS
-
Prototypical Linear Mapping from Vision Foundation Models for Histopathological Image Retrieval
-
Advisor : LUCIANO REBOUCAS DE OLIVEIRA
-
COMMITTEE MEMBERS :
-
ANGELO AMANCIO DUARTE
-
JEFFERSON FONTINELE DA SILVA
-
LUCIANO REBOUCAS DE OLIVEIRA
-
Data: Dec 18, 2025
-
-
Show Abstract
-
The large-scale digitization of high-resolution histological slides has consolidated digital pathology as a data-driven field, but has also introduced substantial challenges related to storage, annotation, and efficient retrieval of gigapixel-scale images. Content-Based Medical Image Retrieval (CBMIR) systems offer a solution to this scenario by retrieving visually or semantically similar samples directly from morphological content, without exclusive reliance on metadata. Despite recent advances in contrastive learning, current CBMIR methods still require large volumes of labeled data and remain sensitive to domain variations, including differences in staining, tissue preparation, and morphology. Self-supervised Transformer-based models have led to the emergence of visual foundation models (FMs), which learn transferable representations from large collections of unlabeled images. In the context of digital pathology, specialized FMs such as UNI, Virchow, and Phikon have emerged, trained directly on massive collections of histological slides and capable of capturing complex morphological patterns across multiple tissues. However, their potential for medical image retrieval remains underexplored, and their embeddings are not explicitly optimized to represent the fine-grained morphological continuities required in CBMIR. This dissertation investigates the linear mapping of foundation models for histopathological image retrieval, proposing a lightweight transfer scheme based on few-shot learning and prototypes. The method projects pre-trained embeddings into a retrieval-oriented latent subspace, imposing a prototype-centered metric alignment that enhances intraclass compactness and inter-class separability, while preserving the global semantic structure of the FM. 
Evaluations on three biomedical datasets (renal glomeruli, ovarian cancer histology, and dermoscopic skin lesions) demonstrate improvements exceeding 10 percentage points in mean average precision at K (MAP@K) compared to non-adapted FMs. Permutation tests confirm the statistical significance of these gains, while qualitative analyses reveal more structured, coherent, and diagnostically consistent embeddings. The proposed approach aims to bring general-purpose visual foundation models closer to the specific demands of CBMIR in digital pathology, offering a solution focused on competitive performance, efficient retrieval, and semantically coherent representations.
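The reported MAP@K metric can be illustrated with a short sketch. This uses one common formulation of AP@K (precision at each relevant rank, normalized by min(K, number of relevant items)); the labels and counts are toy values, not drawn from the evaluated datasets.

```python
def ap_at_k(retrieved_labels, query_label, k, n_relevant):
    """AP@K: sum of precision@i at relevant ranks i <= k,
    normalized by min(k, n_relevant)."""
    hits, total = 0, 0.0
    for i, label in enumerate(retrieved_labels[:k], start=1):
        if label == query_label:
            hits += 1
            total += hits / i
    return total / min(k, n_relevant)

def map_at_k(queries, k):
    """queries: list of (retrieved_labels, query_label, n_relevant) tuples."""
    return sum(ap_at_k(r, q, k, n) for r, q, n in queries) / len(queries)

# Toy gallery: class labels of the top-5 retrieved patches for two queries.
queries = [
    (["glom", "glom", "other", "glom", "other"], "glom", 3),
    (["other", "glom", "glom", "other", "other"], "glom", 3),
]
print(map_at_k(queries, k=5))
```

Because AP@K rewards relevant items appearing early, the prototype-centered alignment that compacts classes in the latent space translates directly into higher MAP@K.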
|
|
|
Theses |
|
|
1
|
-
Tadeu Nogueira Costa de Andrade
-
Statistical and Computational Intelligence Methods for Timing Analysis in Real-Time Systems
-
Advisor : GEORGE MARCONI DE ARAUJO LIMA
-
COMMITTEE MEMBERS :
-
GIOVANI GRACIOLI
-
ALLAN EDGARD SILVA FREITAS
-
GEORGE MARCONI DE ARAUJO LIMA
-
KONSTANTINOS BLETSAS
-
MAYCON LEONE MACIEL PEIXOTO
-
Data: Mar 27, 2025
-
-
Show Abstract
-
Real-time systems (RTS) are composed of a set of tasks (code segments) that are recurrently launched to be executed and must meet deadlines. Designing such a system in a provably correct manner requires information about the worst-case execution time (WCET) for each of its tasks. However, estimating the WCET is becoming increasingly challenging due to the high complexity of hardware and software in modern platforms. This has motivated the use of techniques to derive the probabilistic worst-case execution time (pWCET). Most existing approaches rely on measuring the execution time of system tasks on the target platform. As measurements are taken at design time, collected samples may lead to unreliable estimates (due to possible measurement bias) or non-representative ones (due to difficulties in reproducing operational conditions). The need to make samples compatible with the assumptions of statistical modeling is an additional source of difficulty. Given these complexities, two studies with distinct objectives were developed. In the first study, execution time is represented based on hardware events, considering different computational intelligence tools. Specifically, for a program under analysis, it is shown that the execution time T(n) as a function of the number n of executed instructions can be correlated with occurrences of hardware-related events. In the second study, a new approach for Measurement-Based Probabilistic Timing Analysis (MBPTA) is presented. Unlike the usual MBPTA, which considers only T(n) as the variable of interest, this new approach incorporates a variable of interest that considers both T(n) and n. Using tuples (n, T(n)) for different values of n allows for exploring multiple execution paths. Additionally, this new approach allows the set of measurements to be evaluated and improved. For this purpose, deep neural networks (DNNs) were employed. 
Since the measurements are considered representative, it is possible to estimate probabilistic bounds on the execution time. Experimental results indicate a difference of up to 30% between the estimates obtained using samples refined by the proposed approach and those obtained using non-refined samples. The approaches are evaluated considering different hardware and program models, and the results demonstrate their effectiveness.
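The idea of correlating T(n) with the instruction count n can be illustrated with a least-squares fit over (n, T(n)) tuples. This toy sketch stands in for the far richer hardware-event and DNN-based modeling used in the studies; the measurement values are invented for the example.

```python
def fit_time_per_instruction(samples):
    """Least-squares fit T(n) ~ a*n + b from (n, T(n)) measurement tuples."""
    m = len(samples)
    sx = sum(n for n, _ in samples)
    sy = sum(t for _, t in samples)
    sxx = sum(n * n for n, _ in samples)
    sxy = sum(n * t for n, t in samples)
    a = (m * sxy - sx * sy) / (m * sxx - sx * sx)  # time per instruction
    b = (sy - a * sx) / m                          # fixed overhead
    return a, b

# Measurements lying exactly on T = 2n + 5 recover the coefficients.
a, b = fit_time_per_instruction([(10, 25), (20, 45), (30, 65), (40, 85)])
print(round(a, 6), round(b, 6))   # 2.0 5.0
```

Treating (n, T(n)) jointly, as the second study does, exposes path-dependent variation that a model of T alone would average away.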
|
|
|
2
|
-
LARISSA BARBOSA LEONCIO PINHEIRO
-
ORGANIZING THE TD MANAGEMENT LANDSCAPE FOR REQUIREMENTS DEBT: TECHNICAL AND HUMAN ASPECTS
-
Advisor : RITA SUZANA PITANGUEIRA MACIEL
-
COMMITTEE MEMBERS :
-
CLAUDIO NOGUEIRA SANT ANNA
-
JULIO CESAR SAMPAIO DO PRADO LEITE
-
MANOEL GOMES DE MENDONCA NETO
-
RITA SUZANA PITANGUEIRA MACIEL
-
UIRÁ KULESZA
-
Data: May 16, 2025
-
-
Show Abstract
-
Context: Technical debt (TD) contextualizes the problem of pending software development tasks as a type of debt that brings a short-term benefit to the project, often in terms of increased development speed or shortened time to market. TD items can affect different artifacts and phases of software development. It is particularly important to discuss the management of TD in the context of requirements engineering (RE) activities because they are inherently complex, reflect a system purpose aligning different viewpoints of the system's stakeholders, and impact several software development phases. There are two types of debt directly related to RE: requirements and documentation debt. Although several works have investigated the state of the practice on TD concerning its causes, effects, and management, the current literature has not approached the topic from the perspective of requirements and requirements documentation debt (R2DD).
Aims: This Ph.D. dissertation aims to organize the TD management landscape for requirements debt in its technical and human aspects.
Method: Initially, we conducted a literature review on the current state of research on R2DD and on the causes, effects, and practices used for its prevention and repayment. Then, we analyzed data collected by replication teams from the InsighTD project, a family of globally distributed surveys on the causes, effects, and management of TD. From the body of knowledge resulting from the analysis of the InsighTD data, we perceived that the human factor is important when dealing with R2DD; consequently, we investigated the great and the less desirable attributes of requirements engineers and the relationships between them. Based on these investigations, we defined four concept maps.
Results: This work presents the state of practice of R2DD, revealing its causes, effects, and practices used for its prevention and repayment. Regarding causes of R2DD, deadline, ineffective project management, change of requirements, inappropriate planning, and high team turnover are among the five most cited for incurring R2DD. Considering effects of R2DD, the five most cited are delivery delay, rework, financial loss, low external quality, and low maintainability. Well-defined requirements, following the project planning, following a well-defined project process, a well-defined scope statement, and good allocation of resources in the team are the five most commonly cited practices for preventing R2DD items, while lack of qualified professionals, outdated documentation, and short deadlines are the reasons explaining the non-prevention of R2DD. Regarding practices for repaying R2DD items, code refactoring, monitoring and controlling project activities, design refactoring, investing effort in TD repayment activities, and changing project scope are among the five most cited, while focusing on short-term goals, lack of organizational interest, lack of resources, cost, and team overload are the reasons explaining the non-repayment of R2DD. Considering the investigation of the great and the less desirable attributes of requirements engineers, investigative ability to talk to stakeholders, being judicious, understanding the business, good ability to identify missing requirements, and good knowledge of requirements engineering practices are the five most cited attributes of great requirements engineers, while difficulty in relationships, lack of communication, lack of business knowledge, making superficial specifications (without details, with inconsistencies and ambiguities, making the team's work difficult), and lack of organization are the most cited less desirable attributes.
Conclusion: Using the InsighTD data, this work first explores the state of practice of R2DD regarding causes, effects, and practices used for its prevention and repayment. After analyzing these data, and given that the human factor is important when dealing with R2DD, it also explores the great and the less desirable attributes of requirements engineers and the relationships between them. The whole body of knowledge was organized into four artifacts that can drive new investigations on R2DD and support software practitioners in increasing their capabilities.
|
|
|
3
|
-
Diego Corrêa da Silva
-
Exploiting Calibration as a Multi-Objective Recommender System
-
Advisor : FREDERICO ARAUJO DURAO
-
COMMITTEE MEMBERS :
-
ADRIANO CÉSAR MACHADO PEREIRA
-
BRUNO PEREIRA DOS SANTOS
-
FREDERICO ARAUJO DURAO
-
MARCELO GARCIA MANZATO
-
RODRIGO ROCHA GOMES E SOUZA
-
Data: Jun 18, 2025
-
-
Show Abstract
-
Collaborative Recommender Systems generate personalized recommendations by analyzing users' past interactions. However, traditional approaches often prioritize relevance, leading to issues such as super-specialization, popularity bias, and class imbalance. These limitations can result in recommendation lists that fail to represent the full spectrum of a user's interests fairly. In this sense, Calibrated Recommendations address this problem by balancing relevance with fairness (calibration), ensuring that the distribution of recommended items aligns more closely with the user's preference distribution. For example, when the user's profile comprises 80% Adventure and 20% Sci-fi, the calibrated recommendation seeks to generate a list following this distribution. Relevance and calibration are two distinct goals that the system should achieve, and this multi-objective goal is reached through a trade-off balancing approach. This thesis addresses calibrated recommendations as a multi-objective recommendation problem, aiming to measure and improve the calibration of the recommendation list using the user's preferences as a target. The goals of this thesis are divided into studies, and within each study research questions were raised and answered. In the first study, we systematically benchmark 57 fairness measures, introducing novel methods for extracting user preference distributions and refining relevance estimation; as a result, four measures tie for the best performance. In the second study, we explored the broader impact of calibration on key recommendation objectives, including novelty, coverage, personalization, unexpectedness, and serendipity. Our findings indicate that calibration enhances item coverage and personalization while maintaining high recommendation utility. In the third study, we investigate the structural properties of the distributions used in calibrated recommendations. 
Unlike traditional recommender systems that operate in a one-dimensional space, calibrated recommendations involve high-dimensional user preference distributions. Our analysis shows that calibrated recommendation lists naturally form distinct user clusters, a phenomenon best understood through outlier detection models. In the fourth study, we propose two novel approaches for modeling user preferences to enhance the accuracy and adaptability of calibration techniques. The first method incorporates time-sensitive weighting to discount outdated preference information. The second method introduces an entropy-based approach to better capture user preferences in domains where item features are set-valued, such as movies with multiple genres. Experimental evaluations confirm that these approaches effectively reduce miscalibration while maintaining recommendation accuracy. Overall, this thesis advances the field of calibrated recommendations by providing a comprehensive evaluation of fairness measures, proposing novel calibration techniques, and analyzing the structural properties of user preference distributions.
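The notion of miscalibration between a user's preference distribution and a recommendation list's distribution can be illustrated with a KL-divergence sketch in the style of Steck's calibration metric. The smoothing constant and genre values below are illustrative, not the thesis's actual configuration.

```python
import math

def miscalibration(p, q, alpha=0.01):
    """KL-based miscalibration between a user's genre distribution p and the
    genre distribution q of a recommendation list, with the usual smoothing
    so that q(g) = 0 does not make the divergence blow up."""
    genres = set(p) | set(q)
    kl = 0.0
    for g in genres:
        pg = p.get(g, 0.0)
        qg = (1 - alpha) * q.get(g, 0.0) + alpha * pg
        if pg > 0:
            kl += pg * math.log2(pg / qg)
    return kl

# The 80% Adventure / 20% Sci-fi profile from the abstract's example.
profile = {"Adventure": 0.8, "Sci-fi": 0.2}
print(miscalibration(profile, {"Adventure": 0.8, "Sci-fi": 0.2}))  # ~0: calibrated
print(miscalibration(profile, {"Adventure": 1.0}))                 # > 0: miscalibrated
```

A list matching the profile's genre proportions scores near zero; a list that drops Sci-fi entirely is penalized, which is exactly the behavior calibrated re-ranking optimizes against.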
|
|
|
4
|
-
DIEGO ZABOT
-
Game codesign as a pedagogical strategy for Early Childhood Education: a proposal inspired by semi-participatory interaction design
-
Advisor : ECIVALDO DE SOUZA MATOS
-
COMMITTEE MEMBERS :
-
ANDRE LUIS SOUSA SENA
-
DÉBORA NICE FERRARI BARBOSA
-
ECIVALDO DE SOUZA MATOS
-
RODRIGO ROCHA GOMES E SOUZA
-
TACIANA PONTUAL DA ROCHA FALCAO
-
Data: Jul 15, 2025
-
-
Show Abstract
-
Faced with the challenges of the contemporary world, there is an increasing demand for pedagogical practices that foster contemporary skills such as creativity, critical thinking, autonomy, and collaboration from the earliest years of schooling — something that can be enhanced through Applied Computing. The Brazilian National Common Curricular Base (BNCC) reinforces this commitment by proposing, in Early Childhood Education, experiences that promote the construction of meaning, personal expression, and social interaction within the Fields of Experience (Campos de Experiência), through elements related to Computing. In this context, this research investigated the practice of Game Codesign, inspired by Interaction Design, as a pedagogical strategy aimed at Early Childhood Education, focusing on the development of contemporary skills through the BNCC's Fields of Experience. The proposal is based on a semioparticipatory interaction design approach (SPIDe), which articulates principles from Semiotic Engineering and Participatory Design, fostering the involvement of children as coauthors in the creation of games and play activities, valuing their multiple languages and forms of expression. The methodology included a narrative literature review, theoretical analyses, and deductive (modeling) stages for the conception and adaptation of the strategy to Early Childhood Education. In addition, the proposal was evaluated by experts in the fields of Human-Computer Interaction, Game Design, and Early Childhood Education. These procedures supported the interactive construction of a Game Codesign proposal structured into six stages, conceived as a pedagogical strategy aligned with the Fields of Experience of the BNCC.
|
|
|
5
|
-
Lidiany Cerqueira Santos
-
Empathy-Guided Software Development: A Conceptual Framework of Empathy in Software Engineering
-
Advisor : MANOEL GOMES DE MENDONCA NETO
-
COMMITTEE MEMBERS :
-
BIANCA TRINKENREICH
-
KIEV SANTOS DA GAMA
-
MANOEL GOMES DE MENDONCA NETO
-
RENATO LIMA NOVAIS
-
THIAGO SOUTO MENDES
-
Data: Jul 21, 2025
-
-
Show Abstract
-
Context. Empathy is the ability to understand and share the emotions of others, a critical skill for software practitioners as it contributes to improved software quality, communication, collaboration, and work environments. Despite its importance, empathy remains an underexplored topic in Software Engineering (SE) research.
Aims. To address this knowledge gap, this dissertation aims to deepen our understanding of empathy in SE by investigating how it is defined, practiced, and experienced by software practitioners, as well as identifying barriers and effects related to empathetic behavior in development contexts.
Method. We adopted a mixed-methods approach. First, we conducted a qualitative analysis of grey literature from practitioner platforms (DEV and Medium). Then, we surveyed software practitioners within a large software organization to quantitatively and qualitatively examine empathy-related perceptions and behaviors. We also evaluated our proposed framework with empathy experts.
Results. The study revealed different meanings and a high perceived value of empathy, as well as barriers that hinder its application in software teams. We identified a set of empathetic practices and categorized them into empathy dimensions based on exploratory factor analysis. We also found that empathy practices are closely linked to positive outcomes, ranging from productivity and technical quality to collaboration, well-being, and professional growth, reinforcing the practical relevance of empathy across multiple dimensions of software practice. These insights informed the development of a conceptual framework of empathy in SE, which was validated through experts' feedback.
Conclusion. This research advances the understanding of empathy in software engineering by offering a theoretically and empirically grounded framework, a curated dataset, and practical implications for teams and organizations. It lays the foundation for future work on empathy-driven practices, tools, and interventions in software development.
|
|
|
6
|
-
Bruno Souza Cabral
-
Evolving Open Information Extraction for Portuguese employing Language Models
-
Advisor : DANIELA BARREIRO CLARO
-
COMMITTEE MEMBERS :
-
MARCOS GARCÍA GONZÁLEZ
-
Aline Marins Paes Carvalho
-
DANIELA BARREIRO CLARO
-
RENATA VIEIRA
-
VLÁDIA CÉLIA MONTEIRO PINHEIRO
-
Data: Sep 15, 2025
-
-
Show Abstract
-
Open Information Extraction (OpenIE) is an important task in Computer Science focused on extracting structured information from text, typically as (argument 1, relation, argument 2) triples, without requiring predefined target relations. OpenIE aims to extract valuable information for uses such as enhancing language understanding, populating knowledge bases, and text comprehension. The extraction of OpenIE relations from Portuguese text presents substantial challenges, primarily due to its rich morphology, frequent use of clitic pronouns, flexible word order, inflected nature, and other linguistic peculiarities. Deep Learning has significantly advanced OpenIE for the English language, with sequence labeling being a common approach. Recently, a new approach, Generative Information Extraction, particularly leveraging generative Large Language Models (LLMs), has emerged as a fruitful alternative. Generative techniques can take a sentence as input and generate structured semantic representations. Despite numerous OpenIE studies focusing on English, research on OpenIE for the Portuguese language, particularly employing Deep Learning methods, remains limited. Existing work often relies on datasets automatically translated from English. Moreover, most Deep Learning approaches for OpenIE in Portuguese have adopted a multilingual perspective, treating it as just one language among many in training datasets, thereby often neglecting its unique linguistic characteristics. This thesis presents a comparative analysis of two methodologies, sequence labeling and generative approaches, for the automated extraction of OpenIE relations from Portuguese texts. A core contribution is the development and curation of diverse Portuguese OpenIE datasets to address data scarcity and enable robust evaluation. These include both manually annotated corpora and novel corpora generated using LLMs. 
The study involves developing and evaluating a sequence labeling model and assessing the performance of generative LLMs on these Portuguese datasets. A comprehensive comparative analysis of these methods is conducted, focusing on their efficacy in extracting OpenIE relations, including abstractive ones, from Portuguese text. This research significantly contributes to the growing body of literature on the application of Deep Learning techniques for OpenIE in the Portuguese language, addresses critical resource gaps, and lays the foundation for further advancements in this field, particularly in exploring generative and abstractive extraction capabilities.
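The sequence-labeling formulation can be illustrated by collapsing BIO-style tags into an (argument 1, relation, argument 2) triple. The tag set and the Portuguese sentence below are illustrative assumptions, not drawn from the thesis datasets; real models predict the tags, and generative LLMs emit the triple directly.

```python
def bio_to_triple(tokens, tags):
    """Collapse BIO-style OpenIE tags into an (arg1, relation, arg2) triple."""
    spans = {"ARG1": [], "REL": [], "ARG2": []}
    for token, tag in zip(tokens, tags):
        if tag != "O":  # "B-ARG1" / "I-ARG1" / "B-REL" / ... -> span key
            spans[tag.split("-", 1)[1]].append(token)
    return tuple(" ".join(spans[key]) for key in ("ARG1", "REL", "ARG2"))

tokens = ["Jorge", "Amado", "nasceu", "em", "Itabuna"]
tags = ["B-ARG1", "I-ARG1", "B-REL", "B-ARG2", "I-ARG2"]
print(bio_to_triple(tokens, tags))  # ('Jorge Amado', 'nasceu', 'em Itabuna')
```

The contrast with the generative formulation is that an LLM would produce the same triple as free-form output, with no token-level alignment, which is what makes abstractive extractions possible.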
|
|
|
7
|
-
JAUBERTH WEYLL ABIJAUDE
-
Integrating Blockchain into the Production Line via IoT: A Use Case from the Cacau Industry
-
Advisor : GEORGE MARCONI DE ARAUJO LIMA
-
COMMITTEE MEMBERS :
-
BILLY ANDERSON PINHEIRO
-
GUIDO LEMOS DE SOUZA FILHO
-
ALIRIO SANTOS DE SA
-
ALLAN EDGARD SILVA FREITAS
-
GEORGE MARCONI DE ARAUJO LIMA
-
LEOBINO NASCIMENTO SAMPAIO
-
Data: Sep 30, 2025
-
-
Show Abstract
-
This dissertation presents the development of an innovative technological artifact that integrates Internet of Things (IoT) devices, blockchain technology, and a service-oriented middleware to enable the traceability and automated control of the fermentation and drying processes of fine cocoa. The work begins with a contextualization of the subject, followed by a conceptual and technical review of the fundamentals of IoT, consensus protocols used in blockchains, and architectural models that support the efficient integration of IoT and blockchain systems. Within this context, both functional and non-functional requirements are analyzed, particularly those enabling the adoption of blockchain solutions in computationally constrained environments, which are typical in IoT scenarios. Subsequently, the development of a service-oriented middleware is described. This middleware supports semantic interoperability through the use of ontologies and is designed to ensure compatibility among heterogeneous devices. It offers APIs based on the REST architectural style, which were adapted to emulate the behavior of SNMP (Simple Network Management Protocol) messages. This approach led to a significant reduction in message exchange and computational resource consumption. Additionally, the dissertation describes the distributed applications developed to configure and manage the control of fermentation and drying actions, supported by custom-designed IoT hardware specifically built for this purpose. The data captured by the sensors are pre-processed and forwarded to the middleware, which then records them in smart contracts deployed on the Ethereum blockchain, thus ensuring the process benefits from the inherent immutability, auditability, and reliability provided by blockchain technology. 
The thesis also presents a contextual overview of the cocoa-producing region of Southern Bahia, discussing the challenges and opportunities associated with the adoption of Agriculture 4.0 technologies. As part of the evaluation of the proposed solution, a proof of concept was developed, demonstrating key results such as computational resource savings, messaging efficiency, and reliability in data traceability. Finally, it is worth noting that beyond its applicability in the cocoa sector, the developed middleware has the potential to be extended to other domains, such as asset inventory control and water resource management, thus demonstrating the versatility and robustness of the proposed architecture.
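The immutability the middleware obtains by recording sensor readings in Ethereum smart contracts can be illustrated with a hash-linked log. This is a stand-in sketch with illustrative field names, not the actual contract interface or data schema.

```python
import hashlib
import json

def append_reading(chain, reading):
    """Append a sensor reading to a hash-linked log, blockchain-style."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"reading": reading, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"reading": reading, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every link; any tampered reading breaks the chain."""
    prev = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"reading": block["reading"], "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected or block["prev"] != prev:
            return False
        prev = block["hash"]
    return True

log = []
append_reading(log, {"sensor": "fermentation-temp", "celsius": 47.5})
append_reading(log, {"sensor": "drying-humidity", "percent": 12.0})
print(verify(log))                      # True
log[0]["reading"]["celsius"] = 20.0     # tamper with the recorded history
print(verify(log))                      # False
```

On the real platform, the linking and verification are enforced by the Ethereum-compatible ledger itself, which is what gives the fermentation and drying records their auditability.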
|
|
|
8
|
-
GEORGE PACHECO PINTO
-
FoT-PDS: A User-Centric Paradigm for Privacy-Preserving IoT
-
Advisor : CASSIO VINICIUS SERAFIM PRAZERES
-
COMMITTEE MEMBERS :
-
MIRIAM AKEMI MANABE CAPRETZ
-
CASSIO VINICIUS SERAFIM PRAZERES
-
FREDERICO ARAUJO DURAO
-
GUSTAVO BITTENCOURT FIGUEIREDO
-
RENATO DE FREITAS BULCÃO NETO
-
Data: Nov 18, 2025
-
-
Show Abstract
-
The IoT poses significant challenges to personal data privacy, as it enables pervasive and ubiquitous data collection and processing, often occurring without the user's knowledge and consent. This situation reinforces the privacy paradox phenomenon, which poses a trade-off between the benefits of technologies and services and the associated privacy risks. At the same time, users' perceptions of data collection and value have been changing, increasing their concern about exchanging their data for services and driving a movement toward more control for privacy protection. In this scenario, this thesis introduces FoT-PDS, an original paradigm to address privacy issues in the IoT context by empowering users with data control, ensuring transparency in data processing, raising awareness of privacy risks, and fostering trust in service providers. It is a user-centric paradigm that integrates the Fog of Things and Personal Data Stores, promoting decentralized data management and granting individuals fine-grained control over who accesses their data and for which purposes. Additionally, the paradigm includes an AI-assisted consent mechanism based on clustering methods to anticipate profiling risks and support informed decision-making by users. Our experimental study results demonstrate that FoT-PDS enhances users' perception of data control, which has a positive and direct impact on privacy awareness and transparency. Moreover, privacy awareness mediates the indirect effect of data control on trust. Further, the technical evaluation demonstrates the feasibility of the consent mechanism and its potential to mitigate profiling risks. These insights provide empirical evidence supporting the adoption of FoT-PDS as a viable and effective approach for promoting data control and mitigating privacy risks in the IoT context.
|
|
|
9
|
-
MAYKA DE SOUZA LIMA
-
CONCEPTUAL FRAMEWORK FOR THE INSTRUCTIONAL DESIGN OF VIRTUAL LEARNING ENVIRONMENTS
-
Advisor : RITA SUZANA PITANGUEIRA MACIEL
-
COMMITTEE MEMBERS :
-
RITA SUZANA PITANGUEIRA MACIEL
-
IVAN DO CARMO MACHADO
-
LAIS DO NASCIMENTO SALVADOR
-
VICTOR STROELE
-
PEDRO HENRIQUE DIAS VALLE
-
Data: Dec 2, 2025
-
-
Show Abstract
-
With the advancement of digital technologies, Virtual Learning Environments (VLEs) have evolved from simple content repositories into interactive spaces that support active methodologies and promote meaningful teaching and learning experiences. However, many of these environments still fail to align with the principles of Instructional Design (ID), which makes it difficult for teachers to develop coherent pedagogical practices. Given this scenario, this research aimed to propose, structure, and evaluate a conceptual framework, grounded in instructional design, to support education professionals in the use of pedagogical strategies in VLEs. The work was conducted according to the Design Science Research (DSR) approach and followed a methodological path composed of a systematic mapping study, a survey of 276 education professionals, interviews with 26 teachers, data triangulation, evaluation of the conceptual framework in three focus-group cycles, and a case study with 10 computer science teachers. Based on the analysis of the survey and interview data, three active methodologies that are recurrent and most significant in educators' practice were identified: the flipped classroom, Problem-Based Learning, and Project-Based Learning. These methodologies were adopted as the structuring axes of the framework, owing to their relevance in promoting collaboration and problem-solving in educational contexts mediated by Digital Technological Resources (DTRs). The resulting conceptual framework aims to integrate elements of instructional design with the functionalities of VLEs, providing a theoretical and practical guide that supports informed decision-making regarding the selection of methodologies, strategies, and digital resources. Its evaluation demonstrated applicability and relevance, especially as a planning and reflection tool for teachers in hybrid and remote contexts.
Thus, this thesis represents the conclusion of an investigative cycle and contributes to the field of educational technologies by offering a structured model that strengthens the pedagogical use of VLEs.
|
|
|
10
|
-
GUILHERME BRAGA ARAUJO
-
On Leveraging Named Data Networking for Vehicular and Edge Computing Applications
-
Advisor : LEOBINO NASCIMENTO SAMPAIO
-
COMMITTEE MEMBERS :
-
ANTONIO ALFREDO FERREIRA LOUREIRO
-
BRUNO PEREIRA DOS SANTOS
-
EDUARDO COELHO CERQUEIRA
-
LEOBINO NASCIMENTO SAMPAIO
-
MAYCON LEONE MACIEL PEIXOTO
-
Data: Dec 12, 2025
-
-
Show Abstract
-
Modern vehicles are increasingly equipped with advanced processing, storage, and wireless communication capabilities, making them more intelligent and interconnected. In this context, Vehicular Ad Hoc Networks (VANETs) are crucial for enabling communication models among vehicles, distributed infrastructures, and monitoring devices. Despite these benefits, integrating VANET solutions into Smart Cities for Edge Computing scenarios requires a network architecture capable of handling heterogeneous communication requirements across distributed applications in multi-access environments. Vehicular mobility, dynamic communication models, and security aspects pose fundamental challenges to the development of scalable and reliable applications. Constant topology changes, caused by mobility, lead to intermittent connections that further complicate the design of distributed applications. Under these conditions, the TCP/IP architecture exhibits significant limitations. In contrast, emerging Information-Centric Networking models, and in particular the Named Data Networking (NDN) architecture, have emerged as promising alternatives, providing an optimized communication model with additional network-layer services such as security support, content-centric delivery, and independence from the physical location of data. This thesis investigates the development of new classes of vehicular applications in realistic Smart City and Edge Computing scenarios, supported by the NDN architecture. First, it presents a study of vehicular networks, addressing their main characteristics, applications, and critical aspects. Then, it provides a detailed analysis of the data-centric communication model, highlighting its intrinsic properties and advantages over the TCP/IP model in VANETs. Finally, proof-of-concept implementations are proposed for different distributed applications, showcasing practical design aspects and employing NDN as the underlying communication architecture between network entities.
The main contributions of this thesis are as follows: (i) the development of an environment for simulating realistic applications in Vehicular Named Data Networking, the NDN4IVC simulator; (ii) the design of an Intelligent Transportation System, named NDN-Waze, for monitoring and optimizing vehicular traffic; (iii) the creation of a service-oriented architecture for data offloading in Vehicular Edge Computing scenarios, called iETR (intelligent Edge-Traffic Routing), designed to efficiently transport large data volumes with the support of the NDN architecture and the orchestration of mobile agents (i.e., vehicles)—data mules—integrated into the edge computing environment. The proposed solutions were evaluated through simulations, which demonstrated that the intrinsic properties of the NDN architecture favor the development of new classes of services in emerging networks.
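NDN's core difference from TCP/IP, retrieving content by name rather than by host address, with Data cached by intermediate nodes on the return path, can be sketched as a toy forwarder. This is purely illustrative and is not code from NDN4IVC, NDN-Waze, or iETR; a real NDN node also keeps a Pending Interest Table, a FIB, and cryptographic signatures.

```python
class ToyNdnNode:
    """Minimal name-based forwarder: a Content Store plus an upstream producer.

    Interests are satisfied from the in-network cache when possible;
    otherwise the Data is fetched upstream and cached on the way back.
    """

    def __init__(self, producer):
        self.cs = {}              # Content Store: name -> Data
        self.producer = producer  # upstream content, keyed by hierarchical name
        self.cache_hits = 0

    def express_interest(self, name):
        # 1. Check the Content Store: any node holding the named Data can answer.
        if name in self.cs:
            self.cache_hits += 1
            return self.cs[name]
        # 2. Cache miss: forward toward the producer, then cache the Data.
        data = self.producer[name]
        self.cs[name] = data
        return data

# Hypothetical traffic-monitoring content, named hierarchically as in NDN.
producer = {"/city/traffic/segment42": "congested"}
router = ToyNdnNode(producer)
first = router.express_interest("/city/traffic/segment42")   # miss: fetched upstream
second = router.express_interest("/city/traffic/segment42")  # hit: served from cache
```

The second vehicle asking for the same road segment is served from the roadside cache without touching the producer, which is exactly the property that makes NDN attractive under intermittent vehicular connectivity.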
|
|
|
11
|
-
FRANCISCO RENATO CAVALCANTE ARAÚJO
-
Service Differentiation in Named Data Networks: New Perspectives through Naming Semantics, Caching, and State Maintenance
-
Advisor : LEOBINO NASCIMENTO SAMPAIO
-
COMMITTEE MEMBERS :
-
PAULO MILHEIRO MENDES
-
RODRIGO BRANDÃO MANSILHA
-
BRUNO PEREIRA DOS SANTOS
-
JOSE AUGUSTO SURUAGY MONTEIRO
-
LEOBINO NASCIMENTO SAMPAIO
-
Data: Dec 16, 2025
-
-
Show Abstract
-
The accelerated growth of applications and services on the Internet, with heterogeneous requirements, demands more efficient service differentiation mechanisms than those offered by traditional IP networks. In this context, Named Data Networking (NDN) emerges as a promising alternative to meet the needs of the current communication model. NDN introduces a paradigm shift from the traditional IP address–oriented architecture by prioritizing content access by name rather than by location, and by offering features such as in-network caching, stateful forwarding, and packet-level security, capabilities that enable more sophisticated resource management. Despite its benefits, NDN by default does not implement service differentiation mechanisms, treating all traffic uniformly, which limits support for applications with distinct Quality of Service (QoS) requirements. Furthermore, the efficiency of on-path caching and conventional forwarding strategies can be compromised in dynamic environments with high competition for resources. This thesis proposes a set of adaptive mechanisms for QoS provisioning in NDN, based on the integration of naming semantics, cache management, and forwarding state maintenance, focusing on service differentiation. The mechanisms developed include: a cooperative and adaptive forwarding mechanism for traffic control based on forwarding state to mitigate Interest flooding in wireless NDN; an opportunistic and cooperative caching mechanism to support producer mobility; a dynamic, content-centric load-balancing mechanism that promotes efficient traffic distribution and differentiated content delivery; and an integrated approach that combines naming semantics and cache management to enhance QoS provisioning and the performance of heterogeneous applications. The proposed mechanisms are designed, implemented, and evaluated through simulations using ndnSIM, considering different network topologies and scenarios, including both mobile and wired networks.
These mechanisms explore several components of the NDN stack, such as naming, caching, and forwarding. In this way, the thesis presents new perspectives for service differentiation in NDN, demonstrating that the integration among different components has the potential to provide QoS more efficiently and to support heterogeneous applications. This work establishes conceptual foundations and practical mechanisms that can guide future research in this field.
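The idea of driving service differentiation from naming semantics, since an NDN name's prefix can reveal the application class of the traffic, can be sketched as a strict-priority Interest queue. This is an illustration only, not one of the thesis's mechanisms; the prefix-to-priority table and the names are hypothetical.

```python
import heapq

# Hypothetical mapping from name prefixes to forwarding priority
# (lower number = higher priority). In a real deployment this would
# come from the application's naming scheme and network policy.
PRIORITY = {"/emergency": 0, "/video": 1, "/bulk": 2}

def classify(name, default=2):
    """Derive a forwarding priority class from an NDN-style name prefix."""
    for prefix, prio in PRIORITY.items():
        if name.startswith(prefix):
            return prio
    return default

class DiffServInterestQueue:
    """Strict-priority queue over Interests, keyed by naming semantics."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # insertion counter: FIFO order within a priority class

    def push(self, name):
        heapq.heappush(self._heap, (classify(name), self._seq, name))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = DiffServInterestQueue()
for name in ["/bulk/backup/1", "/video/cam7/frame", "/emergency/alert/5"]:
    q.push(name)
order = [q.pop() for _ in range(3)]
```

Even though the emergency Interest arrives last, it is forwarded first, which is the essence of the service differentiation that default NDN forwarding, treating all traffic uniformly, does not provide.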
|
|