In medical diagnostics, accuracy is paramount, motivating the careful refinement of AI models through expert-guided fine-tuning that improves precision by adapting models to complex datasets and optimizing outcomes across healthcare settings. By incorporating expert knowledge into the fine-tuning process, these models become proficient at navigating the intricacies of medical data, yielding more precise and dependable diagnostic predictions. As healthcare practitioners grapple with conditions that demand heightened sensitivity, such as cardiovascular disease and continuous blood glucose monitoring, nuanced refinement of Transformer models becomes indispensable. Temporal data, a common feature of medical diagnostics, poses unique challenges for Transformer models: sequential observations over time require models to capture intricate temporal dependencies and complex patterns. This study examines two pivotal healthcare scenarios: detection of Coronary Artery Disease (CAD) from Stress ECGs and identification of psychological stress from Continuous Glucose Monitoring (CGM) data. The CAD dataset was obtained from the Mayo Clinic Integrated Stress Center (MISC) database, which encompasses 100,000 Exercise Stress ECG signals; the study sample (n=1200) was sourced from multiple Mayo Clinic facilities. For the CGM scenario, expert knowledge was used to generate synthetic data with the Bergman minimal model, which was then fed to the Transformer models for classification. In the CAD example, the approach yielded a 28% improvement in Positive Predictive Value (PPV) over the current state of the art, reaching 91.2%. This enhancement demonstrates the efficacy of the approach in improving diagnostic accuracy and underscores the transformative impact of expert-guided fine-tuning in medical diagnostics.
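The Bergman minimal model referenced above is a small system of ordinary differential equations; the sketch below shows one way synthetic CGM-like traces could be generated from it, assuming illustrative parameter values and a simple insulin-decay term rather than the study's expert-chosen settings.

    import numpy as np
    from scipy.integrate import solve_ivp

    Gb, Ib = 90.0, 7.0              # basal glucose (mg/dL) and insulin (uU/mL), assumed
    p1, p2, p3 = 0.03, 0.02, 5e-6   # Bergman model parameters, illustrative values

    def bergman(t, y):
        G, X, I = y
        dG = -p1 * (G - Gb) - X * G     # glucose kinetics
        dX = -p2 * X + p3 * (I - Ib)    # remote insulin action
        dI = -0.1 * (I - Ib)            # insulin decays toward basal (assumed rate)
        return [dG, dX, dI]

    # simulate 8 hours from an elevated start, sampled every 5 min like a CGM sensor
    t_eval = np.arange(0, 480, 5)
    sol = solve_ivp(bergman, (0, 480), [180.0, 0.0, 60.0], t_eval=t_eval)
    cgm_trace = sol.y[0] + np.random.normal(0, 2.0, size=t_eval.size)  # sensor noise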
American Sign Language (ASL) is used by Deaf and Hard of Hearing (DHH) individuals to communicate and learn in classroom settings. In ASL, fingerspelling and gestures are the two primary components of communication. Fingerspelling is commonly used for words that do not have a specifically designated sign or gesture. In technical contexts, such as the Computer Science curriculum, many technical terms fall into this category: most of the field's jargon has no standardized ASL gestures, so students, educators, and interpreters alike rely on fingerspelling, which poses challenges for all parties. This study investigates the efficacy of both fingerspelling and gestures for fifteen technical terms that do have standardized gestures. Each term's fingerspelling and gesture are assessed on preference, ease of use, ease of learning, and time by research subjects selected as DHH individuals familiar with ASL.
The data were collected as a series of video recordings by the research subjects, together with a post-participation questionnaire. Each research subject produced thirty videos in total: two per technical term, one fingerspelling it and one gesturing it. Afterwards, subjects completed a post-participation questionnaire indicating their preference and how easy fingerspelling and gestures were to learn and use. The videos were also analyzed to determine the time difference between fingerspelling and gestures. The analysis reveals that gestures are favored over fingerspelling: they are generally preferred, considered easier to learn and use, and faster to produce. These results underscore the need for standardized gestures in the Computer Science curriculum to make learning accessible, enhance communication, and promote inclusion.
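As a sketch of how the timing comparison could be carried out, the snippet below runs a paired t-test on hypothetical per-term video durations; the actual durations and statistical procedure used in the study are not reproduced here.

    import numpy as np
    from scipy import stats

    # hypothetical mean durations (seconds) for five of the fifteen terms
    fingerspell_s = np.array([4.1, 5.3, 3.8, 6.0, 4.7])
    gesture_s     = np.array([1.9, 2.4, 1.7, 2.8, 2.1])

    t, p = stats.ttest_rel(fingerspell_s, gesture_s)  # paired comparison per term
    print(f"mean difference = {np.mean(fingerspell_s - gesture_s):.2f} s, p = {p:.4f}")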
Wardriving is the practice in which prospective malicious hackers drive around with a portable computer to sniff out and map potentially vulnerable wireless networks. With the advent of smart homes and other Internet of Things devices, this creates the possibility of more unsecured targets. At the same time, the hardware available to the public has miniaturized and grown more powerful: one no longer needs to carry a complete laptop to perform network mapping. Combining this miniaturization with the growing popularity of quadcopter technology yields a more efficient wardriving setup in a potentially more target-rich environment. We therefore set out to build a prototype as a proof of concept of this combination. By creating a bracket that mounts a Raspberry Pi and other wireless sniffing equipment to a drone, we demonstrate that off-the-shelf components can be assembled into a powerful network detection device. In this write-up, we also outline the challenges encountered in combining these two technologies, along with their solutions. Adding payload weight to a drone not designed for it degrades characteristics such as flight behavior and power consumption. Less computing power is available because of the miniaturization a drone-mounted solution requires. Communication between the miniature computer and a ground control computer is also essential to overall system operation. Below, we highlight solutions to these problems as well as improvements that can be implemented for maximum system effectiveness.
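As an illustration of the kind of wireless sniffing such a payload can perform, the sketch below logs nearby access points from 802.11 beacon frames using Scapy. It assumes a Linux system with a wireless adapter already in monitor mode under the hypothetical interface name wlan0mon, and it illustrates the general technique rather than the exact software stack flown on our prototype.

    from scapy.all import Dot11, Dot11Beacon, Dot11Elt, sniff

    seen = {}

    def log_beacon(pkt):
        # record each access point once, keyed by its BSSID (MAC address)
        if pkt.haslayer(Dot11Beacon):
            bssid = pkt[Dot11].addr2
            if bssid in seen:
                return
            ssid = pkt[Dot11Elt].info.decode(errors="replace")
            stats = pkt[Dot11Beacon].network_stats()  # channel, crypto, etc.
            seen[bssid] = ssid
            print(bssid, ssid, stats.get("channel"), stats.get("crypto"))

    # requires root privileges and an interface already in monitor mode
    sniff(iface="wlan0mon", prn=log_beacon, store=False)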
Languages, especially gestural and sign languages, are best learned in immersive environments with rich feedback. Computer-Aided Language Learning (CALL) solutions for spoken languages have successfully incorporated some feedback mechanisms, but no such solution exists for signed languages. Computer-Aided Sign Language Learning (CASLL) is a recent and promising field of research made feasible by advances in Computer Vision and Sign Language Recognition (SLR). Leveraging existing SLR systems for feedback-based learning is not feasible because their decision processes are not human-interpretable and do not facilitate conceptual feedback to learners. Fundamental research is therefore needed on designing systems that are modular and explainable. The explanations from these systems can then be used to produce feedback that aids the learning process.
In this work, I present novel approaches for recognizing location, movement, and handshape, the components of American Sign Language (ASL), using both wrist-worn sensors and webcams. I then present Learn2Sign (L2S), a chatbot-based AI tutor that provides fine-grained conceptual feedback to ASL learners using these modular recognition approaches. L2S is designed to deliver feedback relating directly to the fundamental concepts of ASL through explainable AI. I report system performance in terms of Precision, Recall, and F-1 scores, along with validation results on users' learning outcomes. Retention and execution tests are presented for 26 participants across 14 ASL words learned with Learn2Sign, followed by the results of a post-usage usability survey of all participants. I found that learners who received live feedback on their executions improved both their execution and retention performance: the average increase was 28 percentage points for execution and 4 percentage points for retention.
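The Precision, Recall, and F-1 scores reported here are standard classification metrics; a minimal sketch of how they could be computed for one recognition component is shown below, with illustrative labels standing in for actual location/movement/handshape predictions.

    from sklearn.metrics import precision_recall_fscore_support

    # illustrative ground-truth and predicted labels for the location component
    y_true = ["chest", "chest", "neutral", "head", "neutral", "head"]
    y_pred = ["chest", "neutral", "neutral", "head", "neutral", "chest"]

    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    print(f"precision={p:.2f}  recall={r:.2f}  F-1={f1:.2f}")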
\English is a programming language: a method of allowing programmers to write instructions such that a computer may understand and execute them as a program. Though many programming languages exist, this one is designed for ease of development and for a degree of optimizability that no other programming language offers. Building on the principles of Assembly-level efficiency, referential integrity, and higher-order functionality, the language can produce extremely efficient code; meanwhile, programmatically defined English-based reusable syntax and a strong, static type system make \English easier to read and write than many existing programming languages. Its generalization of all language structures and components to operators leaves the syntax open to project-specific syntactical structuring, making it applicable in more cases. The thesis project requirements came in three parts: a compiler that compiles \English code into NASM Assembly to produce a final program; a standard library defining many of the basic operations of the language, including the creation of lists; and a C translation library that would use \English properties to compile C code with the \English compiler. Though designed and partially coded, the compiler remains incomplete. The standard library, the C translation library, and the design of the language were completed. Additional tools for the language design and implementation were also created, including a Gedit syntax-highlighting configuration file and tutorial-style usage documentation covering basic use of the language. Though the thesis project itself may be complete, the \English project will continue, with the goal of producing a new language realizing the abilities this design makes possible.
With the ever-increasing demand for high-end services, technology companies have been forced to operate high-performance servers. In addition to customer services, companies' internal need to store and manage huge amounts of data has also increased their need to invest in High Density Data Centers. As a result, the performance-to-size ratio of the data center has increased tremendously. Most of the power consumed by the servers is emitted as heat. In a High Density Data Center, the power per unit floor area is higher than in a regular data center, so its thermal management is correspondingly more complicated.
Because of the very high power dissipation within a smaller containment, improper maintenance can cause data center operation to fail within a shorter period. The response time of the cooler to a temperature rise in the servers is therefore critical: any delay in response leads to steadily increasing temperatures and, ultimately, server failure.
In this paper, the significance of this delay time is studied by performing CFD simulations on different variants of High Density Modules using ANSYS Fluent. The delay was found to grow longer as the size of the data center increases, but the overload temperature, i.e., the temperature rise beyond the set-point, became lower with increasing data center size. The results held for both the single-row and the double-row models. The causes of the increased delay are accounted for and explained in detail in this paper.
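The delay-versus-overload effect can be illustrated, far more crudely than the ANSYS Fluent CFD simulations used in this paper, with a lumped-parameter toy model in which the cooler reacts to a temperature sensed some seconds in the past; every parameter value below is assumed.

    # toy lumped-parameter model: all values assumed, not from the CFD study
    C, hA = 5e5, 2000.0             # thermal capacitance (J/K), conductance (W/K)
    P = 20e3                        # server heat load (W)
    setpoint, delay_s = 25.0, 60.0  # cooler set-point (degC), response delay (s)

    dt, t, T = 1.0, 0.0, setpoint
    history = []
    while t < 1800:
        # the cooler reacts to the temperature sensed delay_s seconds ago
        sensed = history[int(t - delay_s)] if t >= delay_s else T
        T_cool = setpoint - 0.5 * max(0.0, sensed - setpoint)  # proportional response
        T += dt * (P - hA * (T - T_cool)) / C
        history.append(T)
        t += dt

    print(f"overload above set-point: {max(history) - setpoint:.1f} K")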
Increasing computational demands in data centers require facilities to operate at higher ambient temperatures and at higher power densities. Conventionally, data centers are cooled with electrically driven vapor-compression equipment. This dissertation proposes an alternative, heat-driven data center cooling architecture whose source is the heat produced by the computing equipment itself. It details experiments investigating the quantity and quality of heat that can be captured from a liquid-cooled microprocessor on a server blade. The experiments involve four liquid-cooling setups and their associated heat extraction, including a radical approach using mineral oil, and examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Uniquely, the investigation establishes a useful simultaneous relationship among CPU temperature, power, and utilization level. In response to the system data, the project explores the effects on heat, temperature, and power of adding insulation, varying water flow, varying CPU loading, and varying the cold plate-to-CPU clamping pressure, with the aim of providing the optimal, steady range of temperatures a chiller needs to operate. Results indicate an increasing relationship among CPU temperature, power, and utilization. Since the dissipated heat can be captured and removed from the system for reuse elsewhere, the need for electricity-consuming computer fans is eliminated. Thermocouple readings of CPU temperatures as high as 93°C and a calculated CPU thermal output of up to 67 Wth show a temperature and thermal energy sufficiently high to serve as the input to an absorption chiller. The dissertation then performs a detailed exergy analysis of a processor and determines the maximum amount of energy utilizable for work. Exergy, as a source of realizable work, is separated into its two contributing constituents: thermal exergy and informational exergy, the latter being the usable work contained within the most fundamental unit of information output by a switching device within a CPU. Exergetic thermal, informational, and efficiency values are calculated and plotted for the particular CPU studied, showing how datasheet standards compare with experimental values. The dissertation concludes with a discussion of the work's significance.
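The thermal-exergy portion of this analysis follows the standard Carnot availability relation Ex = Q(1 - T0/T); the minimal sketch below applies it to the measured values quoted above, with an assumed 25°C dead-state temperature.

    T_cpu = 93.0 + 273.15   # measured CPU temperature, K
    T0    = 25.0 + 273.15   # assumed ambient (dead-state) temperature, K
    Q_th  = 67.0            # captured thermal power, W

    exergy = Q_th * (1.0 - T0 / T_cpu)   # Carnot factor times heat rate
    print(f"usable thermal exergy ~ {exergy:.1f} W")  # roughly 12 W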
In this work, we present approximate adders and multipliers to reduce the data-path complexity of specialized hardware for various image processing systems. These approximate circuits have lower area, latency, and power consumption than their accurate counterparts while producing fairly accurate results. We build upon the work on approximate adders and multipliers presented in [23] and [24]. First, we show how the choice of algorithm and parallel adder design can be used to implement the 2D Discrete Cosine Transform (DCT) with good performance but low area. Our implementation of the 2D DCT has PSNR performance comparable to the algorithm presented in [23], with a ~35-50% reduction in area. Next, we use the approximate 2x2 multiplier presented in [24] to implement parallel approximate multipliers. We demonstrate that if some of the 2x2 multipliers in the design of the parallel multiplier are accurate, the accuracy of the multiplier improves significantly, especially when two large numbers are multiplied. We choose the Gaussian FIR Filter and Fast Fourier Transform (FFT) algorithms to illustrate the efficacy of our proposed approximate multiplier, and show that it improves the PSNR performance of the 32x32 FFT implementation by 4.7 dB compared to the implementation using the approximate multiplier described in [24]. We also implement a state-of-the-art image enlargement algorithm, namely Segment Adaptive Gradient Angle (SAGA) [29], in hardware. The algorithm is mapped to pipelined hardware blocks and the design is synthesized using 90 nm technology. We show that a 64x64 image can be processed in 496.48 µs when clocked at 100 MHz. The average PSNR performance of our implementation is 31.33 dB using accurate parallel adders and multipliers and 30.86 dB using approximate parallel adders and multipliers, evaluated against the original image. The PSNR performance of both designs is comparable to that of the double-precision floating-point MATLAB implementation of the algorithm.
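As a sketch of this recursive construction, the snippet below composes a 4x4 multiplier from 2x2 blocks, assuming the commonly cited approximate 2x2 design in which only the input pair 3x3 is inexact (producing 7 instead of 9); the circuits of [24] are hardware designs, and this model only illustrates their arithmetic behavior.

    def approx_mul2(a, b):
        # 2x2 approximate multiplier: exact for every input pair except 3*3,
        # which returns 7 (binary 111) instead of 9, saving one output bit
        return 7 if (a == 3 and b == 3) else a * b

    def approx_mul4(a, b, accurate_msb=False):
        # compose a 4x4 multiplier from four 2x2 blocks via shift-and-add;
        # optionally make the most-significant block exact to improve accuracy
        ah, al = a >> 2, a & 0b11
        bh, bl = b >> 2, b & 0b11
        hh = (ah * bh) if accurate_msb else approx_mul2(ah, bh)
        return ((hh << 4) + (approx_mul2(ah, bl) << 2)
                + (approx_mul2(al, bh) << 2) + approx_mul2(al, bl))

    # the exact MSB block narrows the worst-case error on large inputs
    print(approx_mul4(15, 15), approx_mul4(15, 15, accurate_msb=True), 15 * 15)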
The energy consumption of data centers is increasing steadily along with the associated power density. Approximately half of this energy consumption is attributed to cooling, so reducing cooling energy alongside server energy consumption is becoming imperative for greening data centers. This thesis deals with cooling energy management in data centers running data-processing frameworks. In particular, we propose thermal-aware scheduling for the MapReduce framework and its Hadoop implementation to reduce cooling energy in data centers. Data-processing frameworks run many low-priority batch processing jobs, such as background log analysis, that do not have strict completion-time requirements and can be delayed by a bounded amount of time. Cooling energy savings are possible by temporally spreading the workload and assigning it to the computing equipment that reduces heat recirculation in the data center room, and therefore the load on the cooling systems. We implement our scheme in Hadoop and perform experiments using both CPU-intensive and I/O-intensive workload benchmarks to evaluate its efficiency. The evaluation results highlight that our thermal-aware scheduling reduces hot spots and makes a uniform temperature distribution within the data center possible. Summarizing the contribution: we incorporate thermal awareness into the Hadoop MapReduce framework by enhancing the native scheduler, compare the resulting Thermal Aware Scheduler (TAS) with the default Hadoop FCFS scheduler by running the PageRank and TeraSort benchmarks in the BlueTool data center of the Impact Lab, and show a reduction in peak temperature and a decrease in cooling power using TAS over the FCFS scheduler.
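A greedy placement rule of the kind such a scheduler embodies can be sketched as follows: given a cross-interference matrix describing heat recirculation, each task is assigned to the server whose loading yields the lowest predicted peak inlet temperature. All values and the linear inlet-temperature model below are assumptions for illustration, not the thesis's actual scheduler code.

    import numpy as np

    # hypothetical cross-interference matrix: D[i, j] is the inlet temperature
    # rise at server i (degC) per watt dissipated at server j
    D = np.array([[0.05, 0.02, 0.01],
                  [0.02, 0.06, 0.02],
                  [0.01, 0.02, 0.05]])
    T_supply = 18.0                          # CRAC supply air temperature, degC
    power = np.array([150.0, 150.0, 150.0])  # current power draw per server, W
    task_power = 80.0                        # extra power a placed task adds, W

    def place_task(power):
        # greedy rule: try each server, keep the one with lowest peak inlet temp
        best, best_peak = None, float("inf")
        for j in range(len(power)):
            trial = power.copy()
            trial[j] += task_power
            peak = (T_supply + D @ trial).max()
            if peak < best_peak:
                best, best_peak = j, peak
        return best, best_peak

    server, peak = place_task(power)
    print(f"assign to server {server}, predicted peak inlet {peak:.2f} degC")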