18 results for "Rakshit A"
Search Results
2. Computational Methods to Analyze Structure and Property of Peptoids.
- Author
- Jain, Rakshit Kumar
- Published
- 2023
3. Liquid swine manure as a nitrogen source for corn and soybean production
- Author
- Rakshit, Sudipta
4. Application of Image Processing and Convolutional Neural Networks for Flood Image Classification and Semantic Segmentation
- Author
- Pally, Jaku Rabinder Rakshit
- Subjects
- Big Data; Computer Vision; Convolutional Neural Networks; Object Detection; Edge Detection; Flood Label Detection; Floodwater Level Estimation; Flood Severity Estimation; Image Processing; Computer and Systems Architecture; Data Storage Systems; Risk Analysis
- Abstract
Floods are among the most destructive natural hazards, affecting millions of people across the world and leading to severe loss of life and damage to property, critical infrastructure, and the environment. Deep learning algorithms are valuable tools for collecting and analyzing actionable flood data for catastrophe readiness. Convolutional neural networks (CNNs) are one form of deep learning widely used in computer vision; they can be used to study flood images by assigning learnable weights and biases to various objects in an image. Here, we discuss how connected vision systems can combine cameras, image processing, CNNs, and data connectivity for flood label detection. We built a training database of >9000 images (an image annotation service), including image geolocation information, by streaming relevant images from social media platforms, South Carolina Department of Transportation (SCDOT) 511 traffic cameras, and US Geological Survey (USGS) live river cameras, and by downloading images from search engines. All these images were manually annotated to train the different models and detect a total of eight object categories. We then developed a new Python package called “FloodImageClassifier” to classify and detect objects within the collected flood images. “FloodImageClassifier” includes various CNN architectures such as YOLOv3 (You Only Look Once version 3), Fast R-CNN (Region-based CNN), Mask R-CNN, SSD MobileNet (Single Shot MultiBox Detector MobileNet), and EfficientDet (efficient object detection) to perform both object detection and segmentation simultaneously. Canny edge detection and aspect-ratio concepts are also included in the package for floodwater level estimation and classification.
The pipeline is designed to train on large numbers of images and to calculate floodwater levels and inundation areas, which can be used to identify flood depth, severity, and risk. “FloodImageClassifier” can be embedded into the USGS live river cameras or 511 traffic cameras to monitor river and road flooding conditions and provide early intelligence to decision makers and emergency response authorities in real time.
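The aspect-ratio idea for water level estimation can be illustrated with a minimal sketch. This is not code from the package: the function names, the reference object, and the severity thresholds are all invented for illustration. The underlying idea is that a detected reference object of known height (e.g. a partially submerged car) reveals how deep the water is.

```python
# Hypothetical sketch of aspect-ratio-based flood water level estimation:
# compare a detected object's visible bounding-box height against its
# known full height. All names and thresholds are illustrative only.

def estimate_submersion(box_height_px, full_height_px):
    """Fraction of a reference object hidden by floodwater (0..1)."""
    visible = min(box_height_px / full_height_px, 1.0)
    return 1.0 - visible

def classify_severity(submersion):
    """Map a submersion fraction to a coarse severity label."""
    if submersion < 0.2:
        return "low"
    if submersion < 0.5:
        return "moderate"
    return "high"

# A car detected at 80 px tall that would normally span 160 px:
s = estimate_submersion(80, 160)
print(classify_severity(s))  # half submerged -> "high"
```

In the real package, the bounding boxes would come from one of the CNN detectors and the edge of the waterline from Canny edge detection; this sketch only shows the geometric step.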
- Published
- 2021
5. A GPU-accelerated MRI sequence simulator for differentiable optimization and learning
- Author
- Rakshit, Somnath
- Subjects
- EPG, MRI, Simulator, Parallelization, Auto-differentiable
- Abstract
The Extended Phase Graph (EPG) algorithm is a powerful tool for magnetic resonance imaging (MRI) sequence simulation and quantitative fitting, but simulators based on EPG are mostly written to run on CPU only and (with some exceptions) are poorly parallelized. A parallelized simulator compatible with learning-based frameworks would be a useful tool to optimize scan parameters, estimate tissue parameter maps, and combine with other learning frameworks. In this thesis, I present our work on an open-source, GPU-accelerated EPG simulator written in PyTorch. Since the simulator is fully differentiable by means of automatic differentiation, it can be used to take derivatives with respect to sequence parameters, e.g. flip angles, as well as tissue parameters, e.g. T₁ and T₂. Here, I describe the simulator package and demonstrate its use for a number of MRI tasks, including tissue parameter estimation and sequence optimization.
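The value of a differentiable simulator can be shown on a toy scale. The sketch below is not the thesis code and not EPG: it differentiates the much simpler decay model S(TE) = exp(−TE/T₂) with hand-rolled forward-mode dual numbers, standing in for what PyTorch autograd does across an entire pulse sequence.

```python
import math

# Minimal sketch (not the thesis simulator): forward-mode dual numbers
# give dS/dT2 of S = exp(-TE / T2) alongside the signal itself. A full
# EPG simulator propagates derivatives through many pulse states the
# same way, via automatic differentiation.

class Dual:
    """A value plus its derivative, propagated through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __truediv__(self, other):
        return Dual(self.val / other.val,
                    (self.dot * other.val - self.val * other.dot) / other.val**2)
    def __neg__(self):
        return Dual(-self.val, -self.dot)

def dexp(x):
    """exp() extended to dual numbers via the chain rule."""
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

def signal_and_grad(te, t2):
    """Return S(TE) = exp(-TE/T2) and dS/dT2 at the given point."""
    T2 = Dual(t2, 1.0)              # seed: differentiate w.r.t. T2
    s = dexp(-(Dual(te) / T2))
    return s.val, s.dot

s, g = signal_and_grad(te=80.0, t2=100.0)
# analytic check: dS/dT2 = exp(-TE/T2) * TE / T2**2
```

The gradient from such a simulator is exactly what sequence optimization and tissue parameter fitting need.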
- Published
- 2021
6. Towards Robust Gaze Estimation and Classification in Naturalistic Conditions
- Author
- Kothari, Rakshit S
- Subjects
- Classification, Eyetracking, Gaze estimation, Segmentation
- Abstract
Eye movements help us identify when and where we are fixating. The location under fixation is a valuable source of information in decoding a person’s intent or as an input modality for human-computer interaction. However, it can be difficult to maintain fixation under motion unless our eyes compensate for body movement. Humans have evolved compensatory mechanisms using the vestibulo-ocular reflex pathway, which ensures stable fixation under motion. The interaction between the vestibular and ocular systems has primarily been studied in controlled environments, with comparatively few studies during natural tasks that involve coordinated head and eye movements under unrestrained body motion. Moreover, off-the-shelf tools for analyzing gaze events perform poorly when head movements are allowed. To address these issues, we developed algorithms for gaze event classification and collected the Gaze-in-Wild (GW) dataset. However, reliable inference of human behavior during in-the-wild activities depends heavily on the quality of gaze data extracted from eyetrackers. State-of-the-art gaze estimation algorithms can be easily affected by occluded eye features, askew eye camera orientation, and reflective artifacts from the environment, factors commonly found in unrestrained experiment designs. To build robustness to reflective artifacts, our efforts helped develop RITNet, a convolutional encoder-decoder neural network which successfully segments eye images into semantic parts such as the pupil, iris, and sclera. Well-chosen data augmentation techniques and objective functions combat reflective artifacts and helped RITNet achieve first place in OpenEDS’19, an international competition organized by Facebook Reality Labs. To induce robustness to occlusions, our efforts resulted in a novel eye image segmentation protocol, EllSeg. EllSeg demonstrates state-of-the-art pupil and iris detection despite the presence of reflective artifacts and occlusions.
While our efforts have shown promising results in developing a reliable and robust gaze feature extractor, convolutional neural networks are prone to overfitting and do not generalize well beyond the distribution of data they were optimized on. To mitigate this limitation and explore the generalization capacity of EllSeg, we acquire a wide distribution of eye images sourced from multiple publicly available datasets to develop EllSeg-Gen, a domain generalization framework for segmenting eye imagery. EllSeg-Gen proposes four tests which allow us to quantify generalization. We find that jointly training with multiple datasets improves generalization for eye images acquired outdoors. In contrast, specialized dataset-specific models are better suited for indoor domain generalization.
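Segmentation quality for eye parts such as the pupil, iris, and sclera is commonly scored with per-class intersection-over-union (IoU). The sketch below is a generic illustration of that metric, not code from RITNet or EllSeg; masks are toy nested lists of class ids, where real pipelines use array libraries.

```python
# Per-class intersection-over-union between a predicted and a ground-truth
# label mask. Class ids and the tiny 3x3 masks below are illustrative.

def class_iou(pred, truth, cls):
    """IoU for one class over two equally sized 2-D label masks."""
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            hit_p, hit_t = p == cls, t == cls
            inter += hit_p and hit_t     # both masks claim this pixel
            union += hit_p or hit_t      # either mask claims this pixel
    return inter / union if union else 1.0

# 0 = background, 1 = iris, 2 = pupil
truth = [[0, 1, 1],
         [1, 2, 1],
         [0, 1, 0]]
pred  = [[0, 1, 0],
         [1, 2, 1],
         [0, 1, 1]]
print(class_iou(pred, truth, 2))  # pupil matches exactly -> 1.0
```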
- Published
- 2021
7. Three-dimensional Morphological Analysis of Normative and Manipulated Carpal Tunnel
- Author
- Shah, Rakshit Dixitkumar
- Subjects
- Biomechanics, Biomedical Engineering, Biomedical Research, Morphology, Wrist, Carpal Tunnel, Three-dimensional Analysis, Robot-Assisted Ultrasonography, Computed Tomography, Kinematics, Carpal Arch, Bone Arch
- Abstract
The carpal tunnel is a complex structure that can be partitioned into two arches: a ligament arch formed by the transverse carpal ligament, and a bone arch formed by the carpal bones. Abnormalities associated with these arches lead to a reduction in the carpal tunnel space available for the median nerve, causing entrapment of the nerve, commonly known as carpal tunnel syndrome. Hence, examination of each arch's space could assist in the prognosis of various wrist pathomorphological conditions. Also, the space available for the median nerve during carpal tunnel syndrome can be augmented noninvasively by applying external radioulnar wrist compressive forces. However, little is known about the underlying effects of wrist compressive forces on the morphology of the individual arches. The overall goal of this dissertation was to investigate the three-dimensional morphology of the ligament and bone arches in the normative and manipulated carpal tunnel. Morphological analysis of the normative tunnel revealed that the bone arch occupies more tunnel space than the ligament arch at all regions, but the ligament arch was the key contributor to the unequal carpal tunnel space across regions. Also, biomechanical manipulation of the wrist in the radioulnar direction resulted in an increase in ligament arch space and a decrease in bone arch space, leading to a decrease in carpal tunnel space across regions. These basic science investigations advance our knowledge of carpal tunnel morphometry and mechanics. The partitional tunnel morphometric knowledge is valuable in understanding regional abnormalities in tunnel morphology associated with various wrist pathological conditions. Additionally, due to the delicate positioning of the median nerve beneath the ligament, augmentation of the ligament arch space by the noninvasive biomechanical manipulation strategy could potentially decompress the median nerve and relieve symptoms associated with carpal tunnel syndrome.
- Published
- 2019
8. Generalized Learning Models for Structured Data
- Author
- Agrawal, Rakshit
- Subjects
- Computer science, aggregations, graphs, learning from structures, machine learning, neural networks, structures
- Abstract
Structures are present in almost everything around us. In most of the systems we interact with, or in the way we interact with them, some emergent structure can often be observed. A simple sentence is a sequence of words. A small classroom of interacting students can be depicted as a network with each student defining a node. These emergent structures highlight the inter-relatedness of different entities within systems: each entity has significant individuality while also being a component of a larger structure. This structural information, combined with individual knowledge, can assist the task of learning properties of such systems. On a social network, for instance, we can learn link-related properties between users by learning from the users as well as from the graph of several users on the same network. Similarly, in an interactive click-stream on a system, the ordered information of the click actions may let us learn the intent of the user performing the clicks. In this dissertation, we present the concept, methodology, and experiments for performing generalized learning from structured data. We discuss the emergence of structures within datasets, and the entire approach to learning from those structures. We provide a methodology for capturing the structures and assembling the information hidden within them. We present the concept of neural aggregation, which combines information from complex structures while preserving the learning capability of the models. We present several neural network based architectures for learning different properties from sequential and graph structures. The dissertation provides the general approach, as well as specific learning frameworks for problems and datasets across several application domains.
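The aggregation idea can be sketched in a few lines. This is an illustrative stand-in, not the dissertation's method: the toy graph and function names are invented, and the aggregator here is a fixed mean, whereas the dissertation's neural aggregators are learned.

```python
# Illustrative sketch of structural aggregation: to predict a property of
# a node, combine its own feature vector with an order-invariant summary
# (here a simple mean) of its neighbours' vectors.

def mean_aggregate(node, features, adjacency):
    """Concatenate a node's features with the mean of its neighbours'."""
    neigh = adjacency.get(node, [])
    if not neigh:
        summary = [0.0] * len(features[node])
    else:
        summary = [sum(features[n][i] for n in neigh) / len(neigh)
                   for i in range(len(features[node]))]
    return features[node] + summary

features = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
adjacency = {"a": ["b", "c"]}          # a's neighbours in a tiny graph
print(mean_aggregate("a", features, adjacency))  # [1.0, 0.0, 0.5, 1.0]
```

The concatenated vector would then feed a downstream learner; replacing the mean with a learned, order-invariant network is what makes the aggregation "neural".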
- Published
- 2019
9. EFFICIENT SECURITY IN EMERGING MEMORIES
- Author
- Rakshit, Joydeep
- Abstract
The wide adoption of cloud computing has established integrity and confidentiality of data in memory as a first-order design concern in modern computing systems. Data integrity is ensured by Merkle Tree (MT) memory authentication. However, in the context of emerging non-volatile memories (NVMs), the increase in cell writes and memory accesses related to MT authentication imposes significant energy, lifetime, and performance overheads. This dissertation presents ASSURE, an Authentication Scheme for SecURE energy-efficient NVMs. ASSURE integrates (i) smart message authentication codes with (ii) multi-root MTs to decrease MT reads and writes, while also reducing the number of cell writes on each MT write. Whereas data confidentiality is effectively ensured by encryption, memory access patterns can be exploited as a side channel to obtain confidential data. Oblivious RAM (ORAM) is a secure cryptographic construct that effectively thwarts access-pattern-based attacks. However, in Path ORAM (the state-of-the-art efficient ORAM for main memories) and its variants, each last-level cache miss (read or write) is transformed into a sequence of memory reads and writes (collectively termed the read phase and write phase, respectively), increasing the number of memory writes due to data re-encryption, increasing the effective latency of memory accesses, and degrading system performance. This dissertation efficiently addresses the challenges of both read and write phase operations during an ORAM access. First, it presents ReadPRO (Read Promotion), an efficient ORAM scheduler that leverages runtime identification of read accesses to prioritize the service of critical-path-bound read phase operations, while preserving all data dependencies.
Second, it presents LEO (Low overhead Encryption ORAM), which reduces cell writes by opportunistically decreasing the number of block encryptions, while preserving the security guarantees of the baseline Path ORAM. This dissertation therefore addresses the core challenges of read/write energy and latency, endurance, and system performance for the integration of essential security primitives in emerging memory architectures. Future research directions will focus on (i) exploring efficient solutions for ORAM read phase optimization and secure ORAM resizing, (ii) investigating the security challenges of emerging processing-in-memory architectures, and (iii) investigating the interplay of security primitives with reliability-enhancing architectures.
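The Merkle Tree authentication that ASSURE optimizes can be shown in miniature. The sketch below is a generic four-block Merkle tree, not ASSURE's multi-root scheme: each memory block is hashed, hashes are paired up to a root, and verifying one block requires only the sibling hashes along its path, which is also why every data write propagates hash updates all the way to the root.

```python
import hashlib

# Toy Merkle Tree over four "memory blocks". Generic illustration of MT
# memory authentication, not ASSURE's optimized multi-root variant.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_root(blocks):
    """Hash leaves, then pair-wise hash upward to a single root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(block, index, siblings, root):
    """Recompute the path from one leaf to the root using sibling hashes."""
    node = h(block)
    for sib in siblings:
        node = h(sib + node) if index % 2 else h(node + sib)
        index //= 2
    return node == root

blocks = [b"blk0", b"blk1", b"blk2", b"blk3"]
root = build_root(blocks)
# Path for block 1: its sibling leaf hash, then the right subtree's hash.
sibs = [h(b"blk0"), h(h(b"blk2") + h(b"blk3"))]
print(verify(b"blk1", 1, sibs, root))  # True; a tampered block fails
```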
- Published
- 2018
10. Injury and Impact Responses of the Abdomen Subjected to Seatbelt Loading
- Author
- Ramachandra, Rakshit
- Subjects
- Biomedical Engineering, Abdomen, Seatbelt, Vascular pressure, Motor vehicle crashes, Thor NT, NASS
- Abstract
Past research has shown that abdominal injuries account for nearly five percent of all injuries that occur during motor vehicle collisions (MVCs) and rank in the top five compared to all body regions. In order to prevent injury to the abdomen, it is necessary to understand the mechanical response of the abdomen under various loading modes. While many studies have investigated the mechanical response of the abdomen by subjecting human surrogates to impact conditions such as rigid-bar, seatbelt, and airbag loading, an agreeable correlation between abdominal injuries and an injury metric is not readily available. This translates directly to the lack of an injury-predictive, biofidelic abdomen in devices such as the Hybrid III (H-III) anthropomorphic test device (ATD), which is currently in use for crash safety regulation. With the rise in use of finite element (FE) models to mimic real-world crash scenarios, it is necessary to correlate virtual human models with injury response and tolerance data from laboratory tests before using them to investigate occupant safety. To address these concerns, this study first identified the frequency and severity of abdominal injuries in MVCs with updated case years using the National Automotive Sampling System (NASS) Crashworthiness Data System (CDS) database. Then, the response of post mortem human surrogates (PMHS) was investigated using a simplified and controllable method, belt loading to the abdomen. The same fixture was employed to test current state-of-the-art abdomen inserts for ATDs to understand their responses to similar loading. A full-body human FE model from the Global Human Body Models Consortium (GHBMC) was subjected to seatbelt loading similar to the experimental setup to identify the gaps between experimental and analytical outcomes.
Based on the NASS analysis, nearly 18,000 adult occupants sustain Abbreviated Injury Scale (AIS) 2+ abdominal injuries in frontal and side crashes, with more than half of these occurring in frontal crashes alone. An increase in the risk of AIS2+ injury to abdominal organs was observed with increasing crash severity; however, the risk remained fairly constant across all age groups. While belted occupants were at lower risk of abdominal injuries compared to unbelted occupants, it is unclear if the lap belt penetrating into the abdomen was the source of injury for belted occupants. When analyzing the risk of injuries to solid organs such as the spleen and liver, the odds of AIS2+ injury to these organs are higher when the occupant also sustains AIS2+ rib fractures. However, the occurrence of solid organ injuries in the absence of rib fractures highlights the need for a separate injury criterion for the abdomen, which does not exist at this time. A total of seven unembalmed PMHS, with an average mass and stature of 71 kg and 174 cm respectively, were subjected to belt loading using a seatbelt pull mechanism, with the PMHS seated upright in a free-back configuration. A pneumatic piston pulled a seatbelt into the abdomen at the level of the umbilicus with a nominal penetration speed of 4.0 m/s. Pressure transducers were placed in the re-pressurized abdominal vasculature, including the inferior vena cava (IVC) and abdominal aorta, to measure internal pressure variation during the event. Jejunum tear, colon hemorrhage, omentum tear, splenic fracture, and transverse process fractures were identified during post-test anatomical dissection. Peak abdominal forces ranged from 2.8 to 4.7 kN. Peak abdominal penetrations ranged from 110 to 177 mm. A force-penetration corridor was developed from the PMHS tests in an effort to benchmark ATD biofidelity. Peak aortic pressures ranged from 30 to 104 kPa and peak IVC pressures ranged from 36 to 65 kPa.
A pressure-based abdominal injury risk function (IRF) was developed, and abdominal injury criteria such as vascular Ṗmax and Pmax·Ṗmax, which exhibited a strong relationship with abdominal injuries, are proposed. Using the same test apparatus, a Hybrid III 50th-percentile male ATD retrofitted with the rate-sensitive gel-filled abdomen developed by Rouhana et al. (2010), and Thor NT ATDs with the standard abdomen and a prototype abdomen proposed by Hanen et al. (2012), were tested. Force-penetration results were compared to the PMHS response. The peak pressure and Ṗmax values of the Thor NT abdomen were compared to the values from the PMHS tests. The retrofitted Hybrid III had a peak force of 3.5 kN with a peak penetration of 95 mm under the same input condition as the PMHS tests. The Thor NT with the standard abdomen had a peak force of 5 kN with a peak penetration of 100 mm. The Thor NT retrofitted with the prototype abdomen had an average peak force and penetration of 4.6 kN and 85 mm, respectively. The pressure values from the prototype abdomen ranged from 149 to 165 kPa. The force-penetration curves from the Thor NT abdomen tests show a similar initial trend to the PMHS tests, although peak force occurred at a smaller penetration compared to the PMHS. The retrofitted Hybrid III displayed a stiff initial response followed by earlier unloading compared to the Thor NT. The Ṗmax values calculated in the pressure cylinders of the Thor NT ATD prototype abdomen corresponded to a 70% risk of abdominal injury based on the IRF developed in the PMHS studies.
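The abstract does not give the fitted injury risk function in closed form, so the sketch below is only a generic illustration of the shape such a pressure-based IRF typically takes: a logistic curve in the injury metric. The coefficients are invented for illustration and are not the study's fitted values.

```python
import math

# Generic logistic injury-risk-function sketch. The intercept and slope
# below are invented; the study's actual IRF coefficients are not given
# in the abstract.

def injury_risk(p_metric, a=-4.0, b=0.05):
    """Illustrative probability of AIS2+ injury vs. a pressure metric."""
    return 1.0 / (1.0 + math.exp(-(a + b * p_metric)))
```

With these toy coefficients the 50% risk point sits at a metric value of 80; a fitted IRF would place that point, and the 70% level mentioned above, wherever the PMHS data dictate.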
- Published
- 2016
11. Communication Efficient Decentralized Information Fusion in Sensor Networks
- Author
- Allamraju, Rakshit Dayal
- Abstract
Many physical, geological, and environmental phenomena are spread over large spatio-temporal scales and require robust sensor network systems to gather data, collaborate, and monitor the area for anomaly detection. In decentralized sensor networks deployed to monitor such events, agents are constrained to monitor localized regions and collaborate with other agents to obtain a common global model. While such sensor networks yield significant benefits such as endurance and scalability, they are constrained in the on-board resources available, such as computational ability, communication bandwidth, and fuel. In this work, the problem of decentralized functional inference over a spatially separated sensor network with limited communication capability is studied. Gaussian Processes (GPs) have been studied as priors over spatially distributed functions. The key benefit of GPs is that the number and location of regression kernels, and the numerical values of the associated weights, are simultaneously inferred from the underlying data. While this enables the model to adapt its structure based on the data, it also makes decentralized inference using consensus-type algorithms difficult if agents do not know each other's kernel selections a priori, and traditional sample-based inference is communication-inefficient if all of the data is shared between agents. A new communication-efficient algorithm for decentralized information fusion over GPs is presented, whose key contribution is that agents do not have to know each other's kernel selections a priori. Instead, the algorithm enables agents to build a global GP model by fusing together local compressed GPs. To further prevent unnecessary communication, the presented algorithm can utilize information-theoretic measures of value-of-information to initiate broadcasts only when agents have sufficient new information.
The algorithm is compared with several state-of-the-art methods on real-world and synthetic datasets, and is shown to achieve good estimation accuracy with decreased communication cost, without having to assume that agents share a common set of kernels. To further evaluate the real-time performance of the presented method, a multi-agent simulator system is developed as a test bench. Experiments for decentralized multi-agent planning, which use the described fusion algorithm, are conducted and the respective results are provided.
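The fusion step can be illustrated at its simplest. The sketch below is a heavily reduced, scalar analogue (not the thesis algorithm): two agents each hold a Gaussian belief (mean, variance) about the same quantity, and precision-weighted fusion combines them, just as the full method fuses whole compressed GP models.

```python
# Scalar stand-in for decentralized fusion: the product of two Gaussian
# beliefs has a precision-weighted mean and summed precision. The full
# algorithm does this over compressed GP posteriors, not single scalars.

def fuse(mu1, var1, mu2, var2):
    """Fuse two Gaussian estimates of the same quantity."""
    prec = 1.0 / var1 + 1.0 / var2          # precisions add
    mu = (mu1 / var1 + mu2 / var2) / prec   # precision-weighted mean
    return mu, 1.0 / prec

mu, var = fuse(2.0, 1.0, 4.0, 1.0)
print(mu, var)  # 3.0 0.5 -- combining two agents halves the variance
```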
- Published
- 2015
12. Three Essays in Applied Time Series Econometrics
- Author
- Rakshit, Atanu
- Subjects
- Kalman Filter, Uncertainty, Exchange rates, VAR, Threshold, Deficit, Interest Rates
- Abstract
This dissertation comprises four chapters. Chapter 1 provides an introduction to economic applications of time series analysis and discusses the topics covered in each of the following chapters, along with some of the main results therein. In Chapter 2, I construct a measure of information asymmetry in the financial markets in the U.S. by estimating an index of agency cost pertaining to U.S. manufacturing firms. The cyclical behavior of the unobservable agency cost is derived by a novel application of the Kalman filter within a Bayesian framework, using firm-level data from 1984-2006. The preliminary results provide support for the financial accelerator mechanism in the business cycle literature. In Chapter 3, I show that people's expectation of uncertainty in financial markets is a significant factor impacting short-term real exchange rate movements. Specifically, a sudden increase in the expectation of stock market volatility in a low interest rate country tends to appreciate its currency against high interest rate currencies. I construct a measure of conditional expected uncertainty from the volatility of returns of the dominant portfolios (indices) of 7 industrialized countries. I identify uncertainty shocks and their impact on the dollar real exchange rate, and explain my results in the context of the currency carry trade. Chapter 4 of my dissertation documents the presence of significant non-linearity in the deficit-interest rate relationship in the U.S. economy. Using an asymptotic threshold test as per Hansen (2000), I find strong evidence for threshold effects in the impact of the expected deficit on future long-term interest rates. I find that a percentage point increase in the expected deficit in a regime where the expected deficit/GDP ratio is above 1.8 percent (the estimated threshold value) increases future nominal long-term interest rates by 29-30 basis points, and a "news shock" to the expectation of future deficits increases future real long-term interest rates by 12-18 basis points.
When expected deficit/GDP ratio is below 1.8 percent, an increase in expected deficit has no impact on future long-term interest rates.
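The Kalman filtering used to extract the unobservable agency cost can be sketched in its simplest scalar form. This is a generic random-walk-plus-noise filter, not the dissertation's Bayesian state-space model; the model and variance values are illustrative.

```python
# Minimal scalar Kalman filter: latent state follows a random walk
# x_t = x_{t-1} + w_t (variance q) and is observed as y_t = x_t + v_t
# (variance r). The dissertation applies a far richer state-space
# version of this recursion to infer an unobservable agency-cost index.

def kalman_filter(ys, q=0.1, r=1.0, x0=0.0, p0=1.0):
    x, p, out = x0, p0, []
    for y in ys:
        p = p + q                 # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (y - x)       # update toward the observation
        p = (1.0 - k) * p         # uncertainty shrinks after update
        out.append(x)
    return out

estimates = kalman_filter([1.0, 1.2, 0.9, 1.1])
```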
- Published
- 2013
13. Design and Implementation of a Hybrid Technology Networking System
- Author
- Rakshit, Sushanta Mohan
- Subjects
- Computer Engineering, Digital Communications and Networking
- Abstract
The safety of rail transport has always been the top priority for the Federal Railroad Administration (FRA). Legacy technology, like wayside monitoring, is still in place and is largely relied upon for detection of faults. Modern technology like Radio Frequency Identification (RFID) has been introduced recently; however, this is largely used to detect a particular railcar rather than to monitor it for problems. Wireless Sensor Network (WSN) technology is being evaluated by the railroads for real-time or near real-time monitoring of the status of railcars, both for timely response to problems and for trend analysis. ZigBee has been the networking protocol of choice for the railroads for its low power consumption and cost of implementation. The railroad scenario presents a long, linear-chain-like network topology which ZigBee was not designed to handle. It has been found that a ZigBee-only network in the railroad environment suffers from drawbacks such as long synchronization delays, severe problems with route discovery and maintenance, aggregation of data errors leading to unacceptable packet loss rates, and the lack of a mechanism to assign traffic priority to critical packets, such as alarms, so that they can reliably traverse the network to the collecting node in the locomotive. The Hybrid Technology Networking (HTN) protocol has been suggested to address the shortcomings of ZigBee in the railroad scenario. It proposes a standards-based, multi-protocol approach that is well suited to the railroad scenario. The current crop of sensor platforms does not provide an integrated environment for the implementation of HTN. In this research work, an integrated hardware platform for the implementation of the HTN protocol is designed and implemented. The guiding principle has been adherence to standards.
The test results using the hardware show that it provides interoperability with available sensor platforms, can interface with other sensing hardware using standard protocols, and provides communication capabilities exceeding those needed by HTN. Advisor: Hamid R. Sharif-Kashani
- Published
- 2013
14. Access Games: A Game Theoretic Framework For Fair Bandwidth Sharing In Distributed Systems
- Author
- Rakshit, Sudipta
- Subjects
- MAC Protocols, fairness, throughput, non-cooperative games, Computer Sciences
- Abstract
In this dissertation, the central objective is to achieve fairness in bandwidth sharing amongst selfish users in a distributed system. Because of the inherent contention-based nature of distributed medium access and the selfishness of the users, distributed medium access is modeled as a non-cooperative game, designated the Access Game. A p-CSMA type medium access scenario is proposed for all the users. Therefore, in the Access Game, each user has two actions to choose from: "transmit" and "wait". The outcome of the Access Game and the payoffs to each user depend on the actions taken by all the users. Further, the utility function of each user is constructed as a function of both Quality of Service (QoS) and Battery Power (BP). Various scenarios involving the relative importance of QoS and BP are considered. It is observed that, in general, the Nash Equilibrium of the Access Game does not result in fairness. Therefore, a Constrained Nash Equilibrium is proposed as a solution. The advantage of the Constrained Nash Equilibrium is that it can be predicated on the fairness conditions, so the solution is guaranteed to result in fair sharing of bandwidth. However, the drawback of the Constrained Nash Equilibrium is that it is not self-enforcing. Therefore, two mechanisms are proposed to design the Access Game in such a way that in each case the Nash Equilibrium of the Access Game satisfies fairness and maximizes throughput. Hence, with either of these mechanisms, the solution of the Access Game becomes self-enforcing.
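A toy two-user version of such a game makes the fairness problem concrete. The payoff numbers below are invented (a lone transmitter gains, a collision costs energy, waiting is neutral) and do not come from the dissertation's QoS/BP utilities; the brute-force check finds the pure-strategy Nash equilibria.

```python
# Toy two-user transmit/wait game. Payoffs are illustrative only. Both
# pure Nash equilibria turn out to be asymmetric (one user always wins
# the channel), which illustrates why a fair outcome must be imposed as
# a constraint rather than expected to emerge on its own.

A = ["transmit", "wait"]
# payoff[(a1, a2)] = (utility of user 1, utility of user 2)
payoff = {
    ("transmit", "transmit"): (-1, -1),   # collision wastes energy
    ("transmit", "wait"):     (2, 0),     # lone transmitter succeeds
    ("wait", "transmit"):     (0, 2),
    ("wait", "wait"):         (0, 0),     # idle channel
}

def is_nash(a1, a2):
    """Neither user can gain by unilaterally switching actions."""
    u1, u2 = payoff[(a1, a2)]
    best1 = all(u1 >= payoff[(d, a2)][0] for d in A)
    best2 = all(u2 >= payoff[(a1, d)][1] for d in A)
    return best1 and best2

equilibria = [(a1, a2) for a1 in A for a2 in A if is_nash(a1, a2)]
print(equilibria)  # [('transmit', 'wait'), ('wait', 'transmit')]
```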
- Published
- 2005
15. Finite Element Analysis of Problems in Topology Optimization
- Author
- Rakshit, Abhik
- Subjects
- topology optimization, checkerboard patterns
- Abstract
Topology optimization is fast emerging as an integral part of the product development cycle using Computer Aided Engineering tools. The optimal structure and shape of a product can be predicted in the initial stages of a development cycle using topology optimization. The goal of topology optimization is to find the best distribution of material for a structure such that an objective criterion, like global stiffness, takes on an extremum value subject to given constraints. These constraints are typically placed on the volume. In this thesis, some of the numerical issues that occur in the solution of a topology optimization problem are discussed. These numerical issues include the formation of checkerboard patterns in the final topology and sensitivity of the optimal solution to the mesh size used to discretize a domain. A computationally cheaper heuristic filtering scheme to counter these numerical instabilities is studied. The effects of non-conforming or discontinuous Galerkin finite element formulations to solve problems in topology optimization are also studied. Several numerical experiments involving the use of bilinear and biquadratic finite elements for the solution of the topology optimization problem are presented. In addition, an application area referred to as the “inverse homogenization procedure” using the topology optimization procedure for the design of materials with prescribed material properties is examined.
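The heuristic filtering idea can be sketched directly. The code below is a generic neighbourhood-averaging density filter, not the thesis's specific scheme: replacing each element's density with an average over nearby elements smooths out the spurious alternating solid/void cells of a checkerboard pattern.

```python
# Generic density-filter sketch for topology optimization: average each
# element's density over a square neighbourhood. Grid and radius are toy
# values; practical filters weight neighbours by distance.

def filter_densities(rho, radius=1):
    """Neighbourhood-average a 2-D grid of element densities."""
    n, m = len(rho), len(rho[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            total = count = 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < m:
                        total += rho[ii][jj]
                        count += 1
            out[i][j] = total / count
    return out

checkerboard = [[1.0, 0.0], [0.0, 1.0]]   # worst-case alternating pattern
print(filter_densities(checkerboard))      # every entry becomes 0.5
```

The filter also weakens the dependence of the optimum on mesh size, the second numerical issue the thesis discusses.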
- Published
- 2003
16. Optimum design of gear shaper cutters
- Author
- Rakshit, Debkumar.
- Subjects
- Gearing, Spur, Gear-shaving machines, Gear-shaping machines
- Published
- 1989
17. Nutritional Factors Modifying Fatty Liver Hemorrhagic Syndrome in Caged Laying Hens
- Author
- Rakshit, Chandi C.
- Subjects
- Animal Sciences, Life Sciences, Poultry or Avian Science
- Abstract
Three experiments each with two phases were conducted to study the effect of different levels of distillers dried grain (DDG), fat, oats and a mixture of oats and DDG on Fatty Liver Hemorrhagic Syndrome (FLHS) in caged laying hens. The second phase of each experiment was conducted by force-feeding the same diets used in the first phase at 120% of their normal intakes. This produces FLHS and thereby permits studying the effect of diets. During the force-feeding period of the third experiment, serum estradiol levels of hens from each treatment and each type of feeding were measured to study the effect of endogenous estradiol level on FLHS. The 30% DDG in the diet was more effective against FLHS than when present at the 20% level. Addition of 5 to 10% grease in the diet decreased feed consumption significantly (P
- Published
- 1986
18. Effects of Diets on Fatty Liver Hemorrhagic Syndrome (FLHS)
- Author
- Rakshit, Chandi C.
- Abstract
Two experiments were conducted to study the effect of fermentation by-products and oats on production parameters and Fatty Liver Hemorrhagic Syndrome in caged laying hens. Also tested in this study was the effect of force-feeding at 120% of normal intake for experimentally producing FLHS. The first experiment was conducted with sixty hens on three diets at twenty birds per diet. Corn and soybean meal were used as the chief energy and protein sources in diet 1. For diet 2, 10% of a product containing distillers dried grain (30%) with corn distillers solubles (70%) was added, and in diet 3, oats and soybean meal were used as the chief energy and protein sources. Data were recorded on production, mortality, and feed consumption for each of seven periods of 28 days. The data from Experiment 1 showed that distillers dried grain can be used satisfactorily in layer diets. In Experiment 2, 50% of the birds on each diet were force-fed at 120% of normal intake for a period of three weeks by the method described by Wolford and Polin (1972c). The rest continued to be fed on an ad libitum basis. The experimental data showed that force-feeding at 120% of normal intake was satisfactory for experimentally producing FLHS. The liver lipid contents of the force-fed birds were three times as high as those of their normal counterparts. Egg production was reduced significantly in the force-fed birds. The fermentation by-product was found to have little beneficial effect in preventing FLHS. However, oats showed a significant effect in reducing liver fat content and hemorrhage, thereby preventing the disease. Gross pictures of the livers show different color variations which corresponded with their lipid content: the lower the lipid content, the darker the color of the liver. Microscopically (both light and electron), the most discernible difference found between the fatty and normal livers was the size of the fat droplets, indicating the amount of fat infiltration.
The main objectives of this experiment were: (1) to measure the effect of a fermentation by-product and oats on egg production and liver parameters; and (2) to measure the effect of superimposing FLHS on the above by force-feeding at the rate of 120% of the control intake for three weeks.
- Published
- 1981
Discovery Service for Jio Institute Digital Library