27 results for "Lawrence C. Ng"
Search Results
2. Micropower electro-magnetic sensors for speech characterization, recognition, verification, and other applications.
- Author
- John F. Holzrichter, Gregory C. Burnett, Todd J. Gable, and Lawrence C. Ng
- Published
- 1998
- Full Text
- View/download PDF
3. A new approach to learning and recognizing leaf diseases from individual lesions using convolutional neural networks
- Author
- Lawrence C. Ngugi, Moataz Abdelwahab, and Mohammed Abo-Zahhad
- Subjects
- Deep learning, Precision agriculture, Leaf disease recognition, Complex background removal, Leaf image segmentation, Lesion classification, Agriculture (General), S1-972, Information technology, T58.5-58.64
- Abstract
Leaf disease recognition using image processing and deep learning techniques is currently a vibrant research area. Most studies have focused on recognizing diseases from images of whole leaves. This approach limits the resulting models’ ability to estimate leaf disease severity or identify multiple anomalies occurring on the same leaf. Recent studies have demonstrated that classifying leaf diseases based on individual lesions greatly enhances disease recognition accuracy. In those studies, however, the lesions were laboriously cropped by hand. This study proposes a semi-automatic algorithm that facilitates the fast and efficient preparation of datasets of individual lesions and leaf image pixel maps to overcome this problem. These datasets were then used to train and test lesion classifier and semantic segmentation Convolutional Neural Network (CNN) models, respectively. We report that GoogLeNet’s disease recognition accuracy improved by more than 15% when diseases were recognized from lesion images compared to when disease recognition was done using images of whole leaves. A CNN model which performs semantic segmentation of both the leaf and lesions in one pass is also proposed in this paper. The proposed KijaniNet model achieved state-of-the-art segmentation performance in terms of mean Intersection over Union (mIoU) score of 0.8448 and 0.6257 for the leaf and lesion pixel classes, respectively. In terms of mean boundary F1 score, the KijaniNet model attained 0.8241 and 0.7855 for the two pixel classes, respectively. Lastly, a fully automatic algorithm for leaf disease recognition from individual lesions is proposed. The algorithm employs the semantic segmentation network cascaded to a GoogLeNet classifier for lesion-wise disease recognition. The proposed fully automatic algorithm outperforms competing methods in terms of its superior segmentation and classification performance despite being trained on a small dataset.
- Published
- 2023
- Full Text
- View/download PDF
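The cascade described in the abstract above (semantic segmentation of leaf and lesion pixels, followed by lesion-wise classification and aggregation) can be sketched as follows. This is a minimal illustration only: the segmentation and classifier stand-ins, the min_area threshold, and the majority-vote aggregation are assumptions for this sketch, not the KijaniNet/GoogLeNet implementation reported in the paper.

```python
# Hedged sketch of a lesion-wise recognition cascade: segment -> crop lesions -> classify each.
import numpy as np
from scipy import ndimage

def crop_lesions(image, lesion_mask, min_area=50):
    """Extract bounding-box crops for each connected lesion region."""
    labels, _ = ndimage.label(lesion_mask)
    crops = []
    for region in ndimage.find_objects(labels):
        if region is None:
            continue
        crop = image[region]
        if crop.shape[0] * crop.shape[1] >= min_area:
            crops.append(crop)
    return crops

def classify_leaf(image, segment_fn, classify_fn):
    """Cascade: semantic segmentation -> per-lesion classification -> majority vote."""
    mask = segment_fn(image)                 # 0 = background, 1 = leaf, 2 = lesion
    lesion_crops = crop_lesions(image, mask == 2)
    if not lesion_crops:
        return "healthy"
    votes = [classify_fn(c) for c in lesion_crops]
    return max(set(votes), key=votes.count)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128, 3))
    # toy stand-ins for the trained segmentation and classification CNNs
    fake_mask = np.ones((128, 128), dtype=int)
    fake_mask[40:60, 40:70] = 2
    print(classify_leaf(img, lambda x: fake_mask, lambda c: "early_blight"))
```

In a real pipeline the two lambdas would be replaced by the trained segmentation network and lesion classifier; the cropping and voting logic is the only part illustrated here.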
4. Comparison of error detectors for time delay tracking.
- Author
- Harold F. Jarvis Jr. and Lawrence C. Ng
- Published
- 1983
- Full Text
- View/download PDF
5. Improved time delay estimation in the presence of interference.
- Author
- Wolfgang K. Fischer and Lawrence C. Ng
- Published
- 1983
- Full Text
- View/download PDF
6. Measurements of glottal structure dynamics
- Author
- Robert M. Sharpe, Nathan J. Champagne, Jeffrey S. Kallman, James B. Kobler, John J. Rosowski, Robert E. Hillman, John F. Holzrichter, Lawrence C. Ng, and Gerry J. Burke
- Subjects
- Glottis, Acoustics and Ultrasonics, Acoustics, Speech coding, Biosensing Techniques, Models, Biological, Vibration, Signal, Imaging, Three-Dimensional, Phonation, Arts and Humanities (miscellaneous), Homodyne detection, Humans, Computer Simulation, Physics, Radiation, Middle Aged, Biomechanical Phenomena, Interferometry, Vocal folds, Female, Vocal tract
- Abstract
Low power, radarlike electromagnetic (EM) wave sensors, operating in a homodyne interferometric mode, are being used to measure tissue motions in the human vocal tract during speech. However, when these and similar sensors are used in front of the laryngeal region during voiced speech, there remains an uncertainty regarding the contributions to the sensor signal from vocal fold movements versus those from pressure induced trachea-wall movements. Several signal-source hypotheses are tested by performing experiments with a subject who had undergone tracheostomy, and who still was able to phonate when her stoma was covered (e.g., with a plastic plate). Laser-doppler motion-measurements of the subject's posterior trachea show small tissue movements, about 15 microns, that do not contribute significantly to signals from presently used EM sensors. However, signals from the anterior wall do contribute. EM sensor and air-pressure measurements, together with 3-D EM wave simulations, show that EM sensors measure movements of the vocal folds very well. The simulations show a surprisingly effective guiding of EM waves across the vocal fold membrane, which, upon glottal opening, are interrupted and reflected. These measurements are important for EM sensor applications to speech signal de-noising, vocoding, speech recognition, and diagnostics.
- Published
- 2005
- Full Text
- View/download PDF
7. Multisensor multitarget time delay vector estimation.
- Author
- Lawrence C. Ng and Yaakov Bar-Shalom
- Published
- 1986
- Full Text
- View/download PDF
8. Comparison between electroglottography and electromagnetic glottography
- Author
- Wayne A. Lea, Ingo R. Titze, Gregory Burnett, Brad H. Story, Lawrence C. Ng, and John F. Holzrichter
- Subjects
- Adult, Male, Diffraction, Glottis, Acoustics and Ultrasonics, Forward scatter, Acoustics, Transducers, Signal, Optics, Phonation, Arts and Humanities (miscellaneous), Humans, Electroglottograph, Physics, Middle Aged, Electric Stimulation, Reflection (physics), Falsetto, Electromagnetic Phenomena
- Abstract
Newly developed glottographic sensors, utilizing high-frequency propagating electromagnetic waves, were compared to a well-established electroglottographic device. The comparison was made on four male subjects under different phonation conditions, including three levels of vocal fold adduction (normal, breathy, and pressed), three different registers (falsetto, chest, and fry), and two different pitches. Agreement between the sensors was always found for the glottal closure event, but for the general wave shape the agreement was better for falsetto and breathy voice than for pressed voice and vocal fry. Differences are attributed to the field patterns of the devices. Whereas the electroglottographic device can operate only in a conduction mode, the electromagnetic device can operate in either the forward scattering (diffraction) mode or in the backward scattering (reflection) mode. Results of our tests favor the diffraction mode because a more favorable angle imposed on receiving the scattered (reflected) signal did not improve the signal strength. Several observations are made on the uses of the electromagnetic sensors for operation without skin contact and possibly in an array configuration for improved spatial resolution within the glottis.
- Published
- 2000
- Full Text
- View/download PDF
9. Speech articulator measurements using low power EM-wave sensors
- Author
- Lawrence C. Ng, Gregory C. Burnett, Wayne A. Lea, and John F. Holzrichter
- Subjects
- Speech Acoustics, Speech production, Acoustics and Ultrasonics, Soft palate, Computer science, Articulator, Acoustics, Equipment Design, Speech processing, Power (physics), Arts and Humanities (miscellaneous), Tongue, Humans, Speech, Radar, Electromagnetic Phenomena
- Abstract
Very low power electromagnetic (EM) wave sensors are being used to measure speech articulator motions as speech is produced. Glottal tissue oscillations, jaw, tongue, soft palate, and other organs have been measured. Previously, microwave imaging (e.g., using radar sensors) appears not to have been considered for such monitoring. Glottal tissue movements detected by radar sensors correlate well with those obtained by established laboratory techniques, and have been used to estimate a voiced excitation function for speech processing applications. The noninvasive access, coupled with the small size, low power, and high resolution of these new sensors, permits promising research and development applications in speech production, communication disorders, speech recognition and related topics.
- Published
- 1998
- Full Text
- View/download PDF
10. Characterization of Ring Laser Gyro Performance Using the Allan Variance Method
- Author
- Lawrence C. Ng and Darryll J. Pines
- Subjects
- Physics, Attitude control system, Applied Mathematics, Aerospace Engineering, Spectral density, Dirac delta function, Characterization (materials science), Optics, Space and Planetary Science, Control and Systems Engineering, Inertial measurement unit, Ring laser gyroscope, Electrical and Electronic Engineering, Allan variance
- Published
- 1997
- Full Text
- View/download PDF
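This record carries no abstract, so as background only: the non-overlapping Allan variance commonly used for this kind of gyro characterization averages the rate output over consecutive intervals of length τ and examines the spread between adjacent averages. The expression below is the standard textbook form, not a reproduction of the paper's equations.

```latex
% Standard (non-overlapping) Allan variance of a rate signal, for reference only.
% \bar{\Omega}_k(\tau) is the average gyro output over the k-th interval of length \tau,
% and M is the number of such intervals.
\sigma^{2}(\tau) \;=\; \frac{1}{2(M-1)} \sum_{k=1}^{M-1}
  \left( \bar{\Omega}_{k+1}(\tau) - \bar{\Omega}_{k}(\tau) \right)^{2}
```

On a log-log plot of σ(τ) versus τ, slopes of −1/2, 0, and +1/2 are conventionally read as angle random walk, bias instability, and rate random walk, respectively.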
11. Real-time mass property estimation
- Author
- Kamal Youcef-Toumi and Lawrence C. Ng; Massachusetts Institute of Technology, Department of Mechanical Engineering; Wright, Andrew M. (Andrew Milton), 1976
- Abstract
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2000. Includes bibliographical references (p. 133). By Andrew M. Wright.
- Published
- 2014
12. Recent advances in image processing techniques for automated leaf pest and disease recognition – A review
- Author
- Lawrence C. Ngugi, Moataz Abdelwahab, and Mohammed Abo-Zahhad
- Subjects
- Precision agriculture, Machine learning, Plant disease recognition, Image processing, Convolutional neural networks, Agriculture (General), S1-972, Information technology, T58.5-58.64
- Abstract
Fast and accurate plant disease detection is critical to increasing agricultural productivity in a sustainable way. Traditionally, human experts have been relied upon to diagnose anomalies in plants caused by diseases, pests, nutritional deficiencies or extreme weather. However, this is expensive, time consuming and in some cases impractical. To counter these challenges, research into the use of image processing techniques for plant disease recognition has become a hot research topic. In this paper, we provide a comprehensive review of recent studies carried out in the area of crop pest and disease recognition using image processing and machine learning techniques. We hope that this work will be a valuable resource for researchers in this area. In particular, we concentrate on the use of RGB images owing to the low cost and high availability of digital RGB cameras. We report that recent efforts have focused on the use of deep learning instead of training shallow classifiers using hand-crafted features. Researchers have reported high recognition accuracies on particular datasets but in many cases, the performance of those systems deteriorated significantly when tested on different datasets or in field conditions. Nevertheless, progress made so far has been encouraging. Experimental results showing the leaf disease recognition performance of ten CNN architectures in terms of recognition accuracy, recall, precision, specificity, F1-score, training duration and storage requirements are also presented. Subsequently, recommendations are made on the most suitable architectures to deploy in conventional as well as mobile/embedded computing environments. We also discuss some of the unresolved challenges that need to be addressed in order to develop practical automatic plant disease recognition systems for use in field conditions.
- Published
- 2021
- Full Text
- View/download PDF
13. Voiced Excitations
- Author
- Robert Steinkraus, Lawrence C. Ng, and John F. Holzrichter
- Published
- 2004
- Full Text
- View/download PDF
14. Denoising of human speech using combined acoustic and EM sensor signal processing
- Author
- Gregory C. Burnett, Lawrence C. Ng, John F. Holzrichter, and Todd J. Gable
- Subjects
- Speech enhancement, Speech production, Signal processing, Voice activity detection, Computer science, Noise (signal processing), Noise reduction, Speech recognition, Speech processing, Digital filter, Signal
- Abstract
Low power EM radar-like sensors have made it possible to measure properties of the human speech production system in real-time, without acoustic interference. This greatly enhances the quality and quantity of information for many speech-related applications (see Holzrichter, Burnett, Ng, and Lea, J. Acoust. Soc. Am. 103 (1) 622 (1998)). By using combined glottal EM sensor and acoustic signals, segments of voiced, unvoiced, and no-speech can be reliably defined. Real-time de-noising filters can be constructed to remove noise from the user's corresponding speech signal.
- Published
- 2002
- Full Text
- View/download PDF
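The abstract above states that the glottal EM signal can delimit voiced, unvoiced, and no-speech segments and that real-time de-noising filters can then be constructed, but gives no algorithmic detail. The sketch below shows one way such a voicing-gated denoiser could look: the EM channel flags non-speech frames, a noise spectrum is learned there, and simple spectral subtraction is applied to the acoustic channel. The framing parameters, voicing threshold, and subtraction rule are all assumptions for this sketch, not the authors' filters.

```python
# Illustrative only: a voicing-gated spectral-subtraction denoiser driven by an
# EM-sensor (GEMS-like) channel synchronized with the microphone signal.
import numpy as np

def frame(x, n=256, hop=128):
    idx = np.arange(0, len(x) - n + 1, hop)
    return np.stack([x[i:i + n] for i in idx]), idx

def denoise(acoustic, em, n=256, hop=128, thresh=0.05, alpha=2.0):
    frames, idx = frame(acoustic, n, hop)
    em_frames, _ = frame(em, n, hop)
    # frames where the EM channel shows glottal activity are treated as speech
    voiced = np.abs(em_frames).mean(axis=1) > thresh * (np.abs(em).max() + 1e-12)
    win = np.hanning(n)
    spec = np.fft.rfft(frames * win, axis=1)
    # learn the noise magnitude spectrum from EM-silent frames only
    noise_mag = np.abs(spec[~voiced]).mean(axis=0) if (~voiced).any() else 0.0
    mag = np.maximum(np.abs(spec) - alpha * noise_mag, 0.05 * np.abs(spec))
    clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=n, axis=1)
    out, norm = np.zeros(len(acoustic)), np.zeros(len(acoustic))
    for f, i in zip(clean, idx):                       # overlap-add resynthesis
        out[i:i + n] += f * win
        norm[i:i + n] += win ** 2
    return out / np.maximum(norm, 1e-8)

if __name__ == "__main__":
    t = np.arange(8000) / 8000.0
    speech = np.sin(2 * np.pi * 150 * t) * (t > 0.5)   # toy voiced segment
    em = np.sin(2 * np.pi * 150 * t) * (t > 0.5)       # toy synchronized EM trace
    noisy = speech + 0.3 * np.random.default_rng(0).standard_normal(t.size)
    print(denoise(noisy, em).shape)
```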
15. Real-time speech masking using electromagnetic-wave acoustic sensors
- Author
- Lawrence C. Ng, John F. Holzrichter, and John T. Chang
- Subjects
- Masking (art), Voice activity detection, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Computer science, Acoustics, Bandwidth (signal processing), Acoustic model, Speech processing, Electromagnetic radiation, Voice activity
- Abstract
Voice activity sensors commonly measure voiced-speech-induced skin vibrations using contact microphones or related techniques. We show that micro-power EM wave sensors have advantages over acoustic techniques by directly measuring vocal-fold motions, especially during closure. This provides 0.1 ms timing accuracy (i.e., ~10 kHz bandwidth) relative to the corresponding acoustic signal, with data arriving ~0.5 ms in advance of the acoustic speech leaving the speaker’s mouth. Preceding or following unvoiced and silent speech segments can then be well defined. These characteristics enable anti-speech waves to be generated, or prior recorded waves recalled, synchronized, and broadcast with high accuracy to mask the user’s real-time speech signal. A particularly useful masking process uses an acoustic voiced signal from the prior voiced speech period which is inverted, carefully timed, and rebroadcast in phase with the acoustic signal currently being spoken. This leads to real-time cancellation of a substantial ...
- Published
- 2013
- Full Text
- View/download PDF
16. System and method for characterizing voiced excitations of speech and acoustic signals, removing acoustic noise from speech, and synthesizing speech
- Author
- Gregory C. Burnett, John F. Holzrichter, and Lawrence C. Ng
- Subjects
- Speech production, Voice activity detection, Acoustics and Ultrasonics, Computer science, Microphone, Acoustics, Articulator, Acoustic model, Speech synthesis, Speech processing, Noise, Arts and Humanities (miscellaneous), Active noise control
- Abstract
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
- Published
- 2006
- Full Text
- View/download PDF
17. Speaker verification system using acoustic data and non-acoustic data
- Author
- Lawrence C. Ng, John F. Holzrichter, Gregory C. Burnett, and Todd J. Gable
- Subjects
- Set (abstract data type), Speaker verification, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Computer science, Speech recognition, Characterization (mathematics), Identity (music), Shape parameter
- Abstract
A method and system for speech characterization. One embodiment includes a method for speaker verification which includes collecting data from a speaker, wherein the data comprises acoustic data and non-acoustic data. The data is used to generate a template that includes a first set of “template” parameters. The method further includes receiving a real-time identity claim from a claimant, and using acoustic data and non-acoustic data from the identity claim to generate a second set of parameters. The method further includes comparing the first set of parameters to the second set of parameters to determine whether the claimant is the speaker. The first set of parameters and the second set of parameters include at least one purely non-acoustic parameter, including a non-acoustic glottal shape parameter derived from averaging multiple glottal cycle waveforms.
- Published
- 2006
- Full Text
- View/download PDF
18. EM sensor measurement of glottal structure versus time
- Author
- John F. Holzrichter, John J. Rosowski, Lawrence C. Ng, Gerald J. Burke, and James B. Kobler
- Subjects
- Acoustics and Ultrasonics, Microphone, Acoustics, Signal, Articulatory phonetics, Amplitude, Arts and Humanities (miscellaneous), Vocal folds, Closing (morphology), Electroglottograph, Vocal tract, Geology
- Abstract
EM wave sensors are being used to measure human vocal tract movements during voiced speech. However, when used in the glottal region there remains uncertainty regarding the contributions to the sensor signal from the vocal fold opening and closing versus those from pressure induced trachea–wall movements. Several signal source hypotheses were tested on a subject who had undergone tracheostomy 4 years ago as a consequence of laryngeal paresis. Measurements of vocal fold and tracheal wall motions were made using an EM sensor, a laser‐Doppler velocimeter, and an electroglottograph. Simultaneous acoustic data came from a subglottal pressure sensor and a microphone at the lips. Extensive 3‐D numerical simulations of EM wave propagation into the neck were performed in order to estimate the amplitude and phase of the reflected EM waves from the 2 different sources. The simulations and experiments show that these sensors measure, depending upon location, both the opening and closing of the vocal folds and the mov...
- Published
- 2002
- Full Text
- View/download PDF
19. Background speaker noise removal using combined EM sensor/acoustic signals
- Author
- Gregory C. Burnett, Lawrence C. Ng, Todd J. Gable, and John F. Holzrichter
- Subjects
- Acoustics and Ultrasonics, Computer science, Microphone, Articulator, Acoustics, Pharynx, Measure (mathematics), Signal, Background noise, Noise, Arts and Humanities (miscellaneous), Vocal folds, Vocal tract, Gesture
- Abstract
Recently, very low‐power EM radarlike sensors have been used to measure the macro‐ and micro‐motions of human speech articulators as human speech is produced [see Holzrichter et al., J. Acoust. Soc. Am. 103, 622 (1998)]. These sensors can measure tracheal wall motions, associated with the air pressure build up and fall as the vocal folds open and close, leading to a voiced speech excitation function. In addition, they provide generalized motion measurements of vocal tract articulator gestures that lead to speech formation. For example, tongue, jaw, lips, velum, and pharynx motions have been measured as speech is produced. Since the EM sensor information is independent of acoustic air pressure waves, it is independent of the state of the acoustic background noise spectrum surrounding the speaker. By correlating the two streams of information together, from a microphone and (one or more) EM sensor signals, to characterize a speaker’s speech signal, much of the background speaker noise can be eliminated in real time. This paper presents several algorithms to demonstrate the added noise suppression capability of the glottal EM sensors (GEMS). [Work supported by NSF and DOE.]
- Published
- 1999
- Full Text
- View/download PDF
20. Speaker verification performance comparison based on traditional and electromagnetic sensor pitch extraction
- Author
- Gregory C. Burnett, Lawrence C. Ng, John F. Holzrichter, and Todd J. Gable
- Subjects
- Noise, Dynamic time warping, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Computer science, Acoustics, Ambient noise level, Pitch detection algorithm, White noise, Audio signal processing
- Abstract
This work compares speaker verification performance between a traditional acoustic-only pitch extraction approach and a new electromagnetic (EM) sensor-based pitch extraction system. The pitch estimation approach was developed at the Lawrence Livermore National Laboratory (LLNL) utilizing Glottal Electromagnetic Micropower Sensors (GEMS, also see http://speech.llnl.gov/). This work expands previous pitch detection work by Burnett et al. [IEEE Trans. Speech and Audio Processing (to be published)] to the specific application of speaker verification using dynamic time warping. Clearly, a distinct advantage of GEMS is its insensitivity to acoustic ambient noise. This work demonstrates the clear advantage of GEMS pitch extraction in improving speaker verification error rates. Cases with added white noise and other speech noise were also examined to show the strengths of the GEMS sensor in these conditions. The EM sensor speaker verification process operated without change over signal-to-noise (SNR) conditions ranging from −20 to −2.5 dB; the acoustic algorithms became unusable at SNR exceeding −10 dB. [Work supported by NSF and DOE.]
- Published
- 1999
- Full Text
- View/download PDF
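The abstract above mentions speaker verification using dynamic time warping over pitch tracks. For readers unfamiliar with DTW, a minimal textbook implementation comparing two pitch contours is shown below; the contour values, length normalization, and any decision threshold are illustrative assumptions, not the system described in the paper.

```python
# Minimal dynamic-time-warping distance between two pitch contours (illustrative only).
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with a symmetric step pattern."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # length-normalized distance

if __name__ == "__main__":
    enrolled = np.array([120, 122, 125, 130, 128, 124], float)       # Hz, hypothetical
    claimed = np.array([119, 121, 124, 131, 129, 125, 124], float)   # Hz, hypothetical
    print("DTW distance:", dtw_distance(enrolled, claimed))
```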
21. Voiced excitation functions calculated from micropower impulse radar information
- Author
- Lawrence C. Ng, Todd J. Gable, John F. Holzrichter, and Gregory C. Burnett
- Subjects
- Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Computer science, Acoustics, Speech synthesis, Micropower, Impulse radar, Impulse (physics), Transfer function, Vocal tract, Articulatory phonetics
- Abstract
Efforts underway at the Lawrence Livermore National Laboratory to use newly designed micropower impulse radars (MIR) to measure in real time the excitation function of the vocal tract will be presented. Studies undertaken in collaboration with the University of California at Davis and the University of Iowa with high‐speed laryngoscopic cameras, electroglottographs, flow masks, and subglottal pressure transducers have solidified the relationship between the signal returned by the MIR and the voiced excitation function of the vocal tract. As a result, for the first time a transfer function of the vocal tract can be calculated in real time and with unprecedented clarity for voiced speech. This new capability could have significant implications for improvements in speech recognition and speech synthesis processing.
- Published
- 1997
- Full Text
- View/download PDF
22. Micropower radar measurements of human vocal articulator motions
- Author
- John F. Holzrichter, Rebecca J. Leonard, Gregory Burnett, Wayne A. Lea, and Lawrence C. Ng
- Subjects
- Acoustics and Ultrasonics, Computer science, Interface (computing), Acoustics, Articulator, Micropower, Motion (physics), Arts and Humanities (miscellaneous), Tongue, Vocal folds, Radar, Vocal tract
- Abstract
A major impediment to speech research is the lack of readily available scientific means of measuring vocal tract motion while a subject is speaking. Several experiments have been conducted recently using the micropower radars, invented at the Lawrence Livermore National Laboratory (LLNL), to detect speech articulator motions including vocal folds, lips, and tongue with considerable success. In order to establish the scientific accuracy of the articulator measurements, it is important to quantify the radar returns from known tissue interface positions. One of the most important articulatory systems is the glottal structure which defines the vocalized excitation function of human speech. This structure can now be measured in real time, and therefore one can describe the excitation function in real time for each speech time frame. Calibration experiments have been conducted to relate the radar return signals to glottal opening, as well as airflow and air pressure. These results will be presented in detail....
- Published
- 1996
- Full Text
- View/download PDF
23. Uses of micropower radars for speech coding and applications
- Author
- Gregory Burnett, Wayne A. Lea, Lawrence C. Ng, and John F. Holzrichter
- Subjects
- Voice activity detection, Acoustics and Ultrasonics, Computer science, Acoustics, Speech recognition, Speech coding, Acoustic model, Speech organ, Speech processing, Linear predictive coding, Conjunction (grammar), Arts and Humanities (miscellaneous), Codec2, Sound pressure
- Abstract
It has recently become possible to measure the positions and motions of the human speech organs, as speech is being articulated, by using micropower radars in a noninvasive manner. Using these instruments, the vocalized excitation function of human speech is measured, and thereby the transfer function of each constant vocalized speech unit is obtained by deconvolving the output acoustic pressure from the input excitation function. In addition, the positions of the tongue, lips, jaw, velum, and glottal tissues are measured for each speech unit. Using these data, very descriptive feature vectors could be formed for each acoustic speech unit. It is believed that these new data, in conjunction with presently obtained acoustic data, will lead to more efficient speech coding, recognition, synthesis, telephony, and prosthesis.
- Published
- 1996
- Full Text
- View/download PDF
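In textbook source-filter terms, the deconvolution step the abstract above refers to is a division in the frequency domain: if the radar-measured excitation e(t) drives a vocal-tract filter h(t) to produce the acoustic pressure p(t), then (ignoring regularization and the lip-radiation term, which the abstract does not detail)

```latex
p(t) = (e * h)(t)
\;\Longleftrightarrow\;
P(\omega) = E(\omega)\,H(\omega)
\quad\Rightarrow\quad
H(\omega) = \frac{P(\omega)}{E(\omega)}, \qquad E(\omega) \neq 0 .
```

This is general background on deconvolution, not a reproduction of the authors' processing chain.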
24. Measurements of glottal structure dynamics.
- Author
- John F. Holzrichter, Lawrence C. Ng, Gerry J. Burke, Nathan J. Champagne, Jeffrey S. Kallman, Robert M. Sharpe, James B. Kobler, Robert E. Hillman, and John J. Rosowski
- Subjects
- DETECTORS, DIAPHRAGMS (Mechanical devices), SPEECH perception, SIGNALS & signaling
- Abstract
Low power, radarlike electromagnetic (EM) wave sensors, operating in a homodyne interferometric mode, are being used to measure tissue motions in the human vocal tract during speech. However, when these and similar sensors are used in front of the laryngeal region during voiced speech, there remains an uncertainty regarding the contributions to the sensor signal from vocal fold movements versus those from pressure induced trachea-wall movements. Several signal-source hypotheses are tested by performing experiments with a subject who had undergone tracheostomy, and who still was able to phonate when her stoma was covered (e.g., with a plastic plate). Laser-doppler motion-measurements of the subject's posterior trachea show small tissue movements, about 15 microns, that do not contribute significantly to signals from presently used EM sensors. However, signals from the anterior wall do contribute. EM sensor and air-pressure measurements, together with 3-D EM wave simulations, show that EM sensors measure movements of the vocal folds very well. The simulations show a surprisingly effective guiding of EM waves across the vocal fold membrane, which, upon glottal opening, are interrupted and reflected. These measurements are important for EM sensor applications to speech signal de-noising, vocoding, speech recognition, and diagnostics. © 2005 Acoustical Society of America.
- Published
- 2005
- Full Text
- View/download PDF
25. Fast‐moving average recursive least‐mean‐squares fit
- Author
- Robert A. LaTourette, Lawrence C. Ng, and Adam Siconolfi
- Subjects
- Reduction (complexity), Least mean squares filter, Mathematical optimization, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Computational complexity theory, Moving average, Computation, Process (computing), Filter (signal processing), Algorithm, Mathematics
- Abstract
A new approach is developed to reduce the computational complexity of a moving average least-mean-squares fit (LMSF) procedure. For a long data window, a traditional batch approach would result in an order N number of operations, where N is the window length. This study shows that the moving average batch LMSF procedure could be made equivalent to a recursive process with identical filter memory length, but with an order-N reduction in computation load. The increase in speed due to reduced computation could make the moving average LMSF procedure competitive for many real-time processing applications.
- Published
- 1989
- Full Text
- View/download PDF
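To make the idea in the abstract above concrete, here is one way a sliding-window least-mean-squares line fit can be updated recursively with O(1) work per new sample, by carrying only the running sums of y and of i·y. The update algebra is a generic derivation consistent with the abstract, not a reproduction of the paper's filter.

```python
# Sketch: sliding-window least-squares line fit y ≈ a + b*i over the last N samples,
# updated recursively (O(1) per sample) instead of refitting the whole window.
from collections import deque

class MovingLMSF:
    def __init__(self, N):
        self.N = N
        self.buf = deque(maxlen=N)
        self.S = 0.0   # running sum of y over the window
        self.T = 0.0   # running sum of i*y, i = 0 (oldest) .. len-1 (newest)

    def update(self, y):
        if len(self.buf) == self.N:
            y_old = self.buf[0]
            # shift indices down by one, drop the oldest sample, append newest at i = N-1
            self.T = self.T - (self.S - y_old) + (self.N - 1) * y
            self.S = self.S - y_old + y
        else:
            self.T += len(self.buf) * y
            self.S += y
        self.buf.append(y)

    def fit(self):
        n = len(self.buf)
        if n < 2:
            return None
        Si = n * (n - 1) / 2.0                  # sum of i
        Sii = (n - 1) * n * (2 * n - 1) / 6.0   # sum of i^2
        b = (n * self.T - Si * self.S) / (n * Sii - Si * Si)
        a = (self.S - b * Si) / n
        return a, b

if __name__ == "__main__":
    f = MovingLMSF(4)
    for y in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
        f.update(y)
    print(f.fit())   # window [3, 4, 5, 6] -> intercept 3.0, slope 1.0
```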
26. Equivalent bandwidth of a general class of polynomial smoothers
- Author
- Robert A. LaTourette and Lawrence C. Ng
- Subjects
- Frequency response, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Colored, Control theory, Bandwidth (signal processing), Algorithm, Mathematics
- Abstract
This paper presents a detailed investigation of the properties of a general class of least‐mean‐square‐fit (LMSF) smoothers in the presence of a white or colored input sequence. The results of the investigation show that the LMSF can be described as a low‐pass filter whose frequency response characteristics can be calculated exactly. A particularly useful result derived from the frequency response characteristics is the LMSF equivalent bandwidth. It was shown that knowledge of LMSF bandwidth, plus knowledge of the input bandwidth, provides the second‐order statistical description of the LMSF output noise process. Results of the analysis are verified by extensive computer simulation.
- Published
- 1983
- Full Text
- View/download PDF
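As a reminder of the quantity the abstract above works with, stated here in its generic textbook form rather than the paper's exact expressions: the equivalent noise bandwidth of a low-pass smoother with frequency response H(f) and DC gain H(0) is

```latex
B_{\mathrm{eq}} \;=\; \frac{1}{\lvert H(0)\rvert^{2}} \int_{0}^{\infty} \lvert H(f)\rvert^{2}\, df ,
```

so that for a white input with one-sided power spectral density N_0 the output noise variance is σ² = N_0 B_eq |H(0)|², which for a unity-DC-gain smoother reduces to N_0 B_eq. This is the sense in which knowledge of the smoother bandwidth, plus the input bandwidth for colored inputs, fixes the second-order statistics of the output noise process.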
27. The effect of target motion on bearing estimates from a beam interpolation algorithm
- Author
- Lawrence C. Ng and Robert A. LaTourette
- Subjects
- Physics, Acoustic array, Bearing (mechanical), Acoustics and Ultrasonics, Process (computing), Motion (geometry), Peak response, Optics, Arts and Humanities (miscellaneous), Beam (structure), Energy (signal processing), Interpolation
- Abstract
An initial estimate of a target's bearing from an acoustic array of preformed beams is determined by choosing the beam pointing direction with the largest detected energy. An attempt to improve on this initial bearing estimate is accomplished by interpolating the response of three beams centered about the peak response. This interpolation process is designed to yield an improved estimate of the target bearing between beam pointing directions. This paper is a study of the merits of this beam interpolation process in the presence of target motion.
- Published
- 1988
- Full Text
- View/download PDF
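One common way to interpolate between three adjacent beam responses is a parabolic fit through the peak beam and its two neighbours, which gives the bearing offset in units of the beam spacing. It is shown below as general background only; the paper's specific interpolation formula and its target-motion analysis are not reproduced here.

```latex
% Parabolic (three-point) interpolation about the strongest beam.
% E_{-1}, E_0, E_{+1}: detected energies of the lower neighbour, peak, and upper
% neighbour beams; \theta_0: peak-beam pointing direction; \Delta\theta: beam spacing.
\delta \;=\; \frac{1}{2}\,\frac{E_{-1} - E_{+1}}{E_{-1} - 2E_{0} + E_{+1}},
\qquad
\hat{\theta} \;=\; \theta_{0} + \delta\,\Delta\theta .
```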