344 results for "Dongwoo Kang"
Search Results
202. Compartmental approach to assess bioequivalence compared to the noncompartmental approach
- Author
-
Renhua Zheng, Sung Eun Kim, Dongwoo Kang, and Bo-Hyung Kim
- Subjects
Tiropramide, Population, Cmax, Organophosphonates, Bioequivalence, Statistics, Humans, Pharmacology (medical), Analysis method, Mathematics, Pharmacology, Adenine, Finasteride, Small sample, Confidence interval, Therapeutic Equivalency, Sample size determination, Tyrosine, Monte Carlo Method - Abstract
OBJECTIVE The objective of this study is to evaluate the relative performance of individual or population compartmental analysis (ICA or PCA) vs. noncompartmental analysis (NCA) in estimating the systemic exposures of drugs to assess bioequivalence (BE) between original and generic formulations in the case of limited datasets. METHODS BE study data of adefovir, finasteride, and tiropramide were chosen. The analyses were performed for 1) the original dataset, 2) a limited small-size dataset in which the number of subjects was decreased by half, and 3) a limited minimal-sampling dataset of 9 timepoints. For NCA and ICA, the Cmax and AUCinf were estimated using WinNonlin®. The PCA was implemented in NONMEM®, and Monte Carlo simulation was then utilized to generate 10,000 sets of Cmax and AUCinf. RESULTS The 90% confidence intervals (CIs) of the original datasets of the 3 drugs were all within the BE acceptance criteria regardless of the analysis method. For the small-sample-size datasets of adefovir and finasteride, the BE results were maintained. For tiropramide, the lower boundary of the CI computed from the ICA or PCA results was less than 0.800 for all 3 small sample sizes (n = 22, 16, 10), but that of the NCA results was less than 0.800 only for the smallest sample size (n = 10). For the minimal-sampling datasets, results were within the BE acceptance criteria for all 3 analyses. CONCLUSIONS Compartmental approaches can provide a complementary method for BE assessment and can be used for restricted-design studies.
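As an aside on the acceptance rule applied above: average bioequivalence is declared when the 90% CI of the geometric mean test/reference ratio lies within 0.800–1.250. A minimal sketch of that decision in Python (not the authors' code; the normal quantile is used here as an approximation to the exact t quantile, and the function names are illustrative):

```python
import math
from statistics import NormalDist, mean, stdev

def be_90ci(log_ratios):
    """90% CI for the geometric mean test/reference ratio, from
    within-subject differences ln(test) - ln(reference) of a PK
    metric such as Cmax or AUCinf."""
    n = len(log_ratios)
    m = mean(log_ratios)
    se = stdev(log_ratios) / math.sqrt(n)
    z = NormalDist().inv_cdf(0.95)  # normal approx. to the t quantile
    return math.exp(m - z * se), math.exp(m + z * se)

def is_bioequivalent(lo, hi):
    # standard average-BE acceptance window
    return lo >= 0.800 and hi <= 1.250
```

With Cmax/AUCinf values estimated in WinNonlin or simulated from NONMEM, the same window would be applied per metric.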
- Published
- 2016
203. Efficient strategy for obtaining reliable pharmacokinetic parameters in population compartmental approaches
- Author
-
Dongwoo Kang, Bo Hyung Kim, Junhee Lee, and Euitae Kim
- Subjects
Pharmacology, Estimation theory, Population, Datasets as Topic, Peripheral compartment, PK Parameters, Models, Biological, Pharmacokinetics, Statistics, Plasma concentration, Humans, Pharmacology (medical), Antipsychotic drug, Absorption rate constant, Mathematics - Abstract
Objective This study aimed to suggest efficient strategies for obtaining reliable pharmacokinetic (PK) parameters with the population compartmental approach (PCA) in early-phase or resource-limited clinical trials with limited data. Methods This study employed plasma concentration data of olanzapine, an antipsychotic drug, from a bioequivalence study. To assess the bias and precision of PK parameters estimated from limited data, this study utilized simulations that generated small-size datasets (SSD) and minimal-sampling datasets (MSD), consisting of limited numbers of volunteers and of PK samplings per volunteer, respectively. Results Clearance (CL) estimates were the most robust, the volumes of the central (Vc) and peripheral (Vp) compartments were moderately affected, and the absorption rate constant (Ka) and intercompartmental clearance (Q) were very sensitive to limited datasets. MSD had a greater impact on the bias and precision of PK parameter estimation than SSD. Conclusions The performance of PK parameter estimation, evaluated by bias and precision from the simulation datasets, was better with SSD than with MSD. This finding implies that collecting more PK samplings is a more efficient strategy than recruiting more volunteers for obtaining informative results when performing PCA.
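The bias and precision criteria described in the Methods are commonly computed as relative bias and relative RMSE over simulated estimates; a small illustrative sketch (the exact metrics used in the paper are not reproduced in this abstract):

```python
from statistics import mean

def relative_bias(estimates, true_value):
    # mean percentage deviation of the estimates from the true value
    return 100.0 * mean((e - true_value) / true_value for e in estimates)

def relative_rmse(estimates, true_value):
    # root-mean-square percentage error, a common precision measure
    sq = mean(((e - true_value) / true_value) ** 2 for e in estimates)
    return 100.0 * sq ** 0.5
```

Applied per parameter (CL, Vc, Vp, Ka, Q) across the simulated SSD or MSD replicates, these two numbers summarize how robust each estimate is to the reduced design.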
- Published
- 2016
204. Eye Tracking based Glasses-free 3D Display by Dynamic Light Field Rendering
- Author
-
Yoon-sun Choi, Byungmin Kang, Jingu Heo, Dongkyung Nam, Hyoseok Hwang, Dongwoo Kang, Juyong Park, Kyu-hwan Choi, Seok Lee, and Jin-Ho Lee
- Subjects
Light field rendering, Image quality, Computer science, Stereo display, Time based, Rendering (computer graphics), Autostereoscopy, Computer graphics (images), Eye tracking, Computer vision, Artificial intelligence, Light field - Abstract
A glasses-free 3D display is developed using a dynamic light field rendering algorithm in which light field information is mapped in real time based on the 3D eye position. We implemented 31.5″ and 10.1″ prototypes.
- Published
- 2016
205. Feasibility of Eye-tracking based Glasses-free 3D Autostereoscopic Display Systems for Medical 3D Images
- Author
-
Hyoseok Hwang, Yoon-sun Choi, Jingu Heo, Juyong Park, Jin-Ho Lee, Dongwoo Kang, Kyu-hwan Choi, Seok Lee, Byongmin Kang, and Dongkyung Nam
- Subjects
Light field rendering, Computer science, Process (computing), Stereoscopy, Stereo display, Clinical diagnosis, Autostereoscopy, Computer graphics (images), Eye tracking, Imaging diagnosis, Computer vision, Artificial intelligence - Abstract
Medical image diagnosis processes with stereoscopic depth provided by 3D displays have not been widely developed yet and remain understudied. Many stereoscopic displays require glasses that are inappropriate for use in clinical diagnosis, explanation, and operating processes in hospitals. An eye-tracking based glasses-free three-dimensional autostereoscopic display monitor system has been developed, and its feasibility for medical 3D images was investigated as a cardiac CT 3D navigator. Our autostereoscopic system uses a slit barrier with a backlight unit (BLU), combined with our vision-based eye tracking system to display 3D images. A dynamic light field rendering technique is applied with the 3D coordinates calculated by the eye tracker, in order to provide a single viewer with the best 3D images and less crosstalk. To investigate the feasibility of our autostereoscopic system, a 3D volume was rendered from 3D coronary CTA images (512 × 512 × 400). One expert reader identified the three main artery structures (LAD, LCX, and RCA) in a shorter time than on an existing 2D display. The reader did not report any eye fatigue or discomfort. In conclusion, we propose a 3D cardiac CT navigator system with a new glasses-free 3D autostereoscopic display, which may improve diagnostic accuracy and speed up the diagnosis process.
- Published
- 2016
206. Oxidation Resistance of Iron and Copper Foils Coated with Reduced Graphene Oxide Multilayers
- Author
-
Jee Youn Kwon, Hyun Sick Hwang, Rodney S. Ruoff, Jae-Hyoung Sim, Hyeon Suk Shin, Yong Jung Kim, Chul Su Kim, Dongwoo Kang, and Hyun Cho
- Subjects
Materials science, Scanning electron microscope, Iron, Inorganic chemistry, Oxide, General Physics and Astronomy, Metal, Coating, Materials Testing, General Materials Science, Particle Size, Graphene oxide paper, Graphene, General Engineering, Oxides, Copper, Nanostructures, Corrosion, Oxygen, Graphite, Adsorption, Raman spectroscopy, Oxidation-Reduction - Abstract
Protecting the surfaces of metals such as Fe and Cu from oxidation is of great importance due to their widespread use. Here, oxidation resistance of Fe and Cu foils was achieved by coating them with reduced graphene oxide (rG-O) sheets. The rG-O-coated Fe and Cu foils were prepared by transferring rG-O multilayers from a SiO2 substrate onto them. The oxidation resistance of these rG-O-coated metal foils was investigated by Raman spectroscopy, optical microscopy, and scanning electron microscopy after heat treatment at 200 °C in air for 2 h. The bare metal surfaces were severely oxidized, but the rG-O-coated metal surfaces were protected from oxidation. The simplicity of this solution process using rG-O is one advantage of the present study.
- Published
- 2012
207. A fact-oriented ontological approach to SAO-based function modeling of patents for implementing Function-based Technology Database
- Author
-
Kwangsoo Kim, Joohyung Lim, Dongwoo Kang, and Sungchul Choi
- Subjects
Database, Computer science, Generalization, General Engineering, Information repository, Semantics, Computer Science Applications, Patent analysis, Artificial Intelligence, Component (UML), Key (cryptography), Ontology, TRIZ, Data mining, Function (engineering) - Abstract
Highlights: Function-Oriented Search (FOS) is a new tool for searching patents to find solutions to new problems. Function-based Technology Database (FTDB) is a key component of FOS. We suggest a fact-oriented ontological approach to implementing an FTDB. The proposed approach implements an FTDB for an SAO-based patent retrieval system to support FOS. We verified the feasibility of the approach by using it to conduct case studies of patent retrieval.
Function-Oriented Search (FOS) has been proposed as a tool for use in searching patent databases to find existing solutions to new problems. To implement FOS effectively, a well-structured Function-based Technology Database (FTDB) is required. An FTDB is a data repository of technology information represented as "function". To implement an FTDB, four features should be addressed: continual data updating, limited-area searching, function generalization, and semantics handling. In this paper, we consider these features to suggest a fact-oriented ontological approach to implementing an FTDB by Subject-Action-Object (SAO)-based function modeling of patents. The proposed approach uses fact-oriented ontology modeling of SAO structures extracted from patent documents, and implements an FTDB as an SAO-based patent retrieval system to support FOS. We also verify the feasibility of the proposed approach by using it to conduct case studies of patent retrieval.
- Published
- 2012
208. Control of size and physical properties of graphene oxide by changing the oxidation temperature
- Author
-
Dongwoo Kang and Hyeon Suk Shin
- Subjects
Materials science, Renewable Energy, Sustainability and the Environment, Graphene, Process Chemistry and Technology, Organic Chemistry, Inorganic chemistry, Graphene foam, Oxide, Energy Engineering and Power Technology, Graphite oxide, Inorganic Chemistry, Chemical engineering, Materials Chemistry, Ceramics and Composites, Graphite, Surface charge, Carbon, Graphene oxide paper - Abstract
The size and the physical properties of graphene oxide sheets were controlled by changing the oxidation temperature of graphite. Graphite oxide (GO) samples were prepared at different oxidation temperatures of 20 °C, 27 °C, and 35 °C using a modified Hummers' method. The carbon-to-oxygen (C/O) ratio and the average size of the GO sheets varied according to the oxidation temperature: 1.26 and 12.4 μm at 20 °C, 1.24 and 10.5 μm at 27 °C, and 1.18 and 8.5 μm at 35 °C. This indicates that the C/O ratio and the average size of the graphene oxide sheets respectively increase as the oxidation temperature decreases. Moreover, it was observed that the surface charge and optical properties of the graphene oxide sheets could be tuned by changing the temperature. This study demonstrates the tunability of the physical properties of graphene oxide sheets and shows that the properties depend on the functional groups generated during the oxidation process.
- Published
- 2012
209. Does Schwabe's Hypothesis Hold in Korea: Empirical Evidence from the Characteristics Changes of Homeownership Demand in Seoul Metropolitan Area
- Author
-
Seong-Woo Lee, Dongwoo Kang, and Hyun-Joong Kim
- Subjects
Housing tenure, Probit model, Development economics, Population, Economics, Demographic economics, Probit, Census, Empirical evidence, Metropolitan area - Abstract
Schwabe's law holds that housing demand depends more on demographic factors than on socio-economic factors as societies reach higher levels of economic development. Based on Schwabe's law, the present study constructs hypotheses to analyze changes in housing demand with respect to housing tenure change in the Seoul Metropolitan Area (SMA) from 1980 to 2005. To test the hypotheses, the authors take advantage of the 2% sample of the Population and Housing Census from 1980 to 2005 and apply a binary probit model with a decomposition method. The authors found that the influence of socio-economic factors on housing tenure weakened in the housing market over this period, while the relative influence of demographic factors strengthened. The present study concludes that the determinants of housing tenure in the SMA shifted dramatically from socio-economic characteristics to demographic factors during the period, which confirms the study's hypotheses.
- Published
- 2012
210. Design and simulation of a non-silicon composite interposer with reinforced structure
- Author
-
Yunna Sun, Dongwoo Kang, Yan Wang, Guifu Ding, and Yanmei Liu
- Subjects
Materials science, Fabrication, Silicon, Composite number, Stiffness, Polymer, STRIPS, Thermal conductivity, Interposer, Composite material - Abstract
A novel Cu-ordered-reinforced polymer composite interposer and its reinforced structure are proposed in this paper. The models were designed and simulated with Comsol software. The simulation indicated that the reinforced structure, which has four strips extending into the polymer area, improves the stiffness and thermal conductivity of the composite interposer. The copper proportion is shown to play an important role in improving the mechanical properties of the composite interposer. The Cu-ordered-reinforced polymer composite interposer with a reinforced structure has potential for overcoming TSV fabrication difficulties.
- Published
- 2015
211. Highly controllable transparent and conducting thin films using layer-by-layer assembly of oppositely charged reduced graphene oxides
- Author
-
Mihee Heo, Jisook Lee, Dongwook Lee, Jin Young Kim, Byeong Su Kim, Hyeon Suk Shin, Tae Keun Hong, and Dongwoo Kang
- Subjects
Materials science, Fabrication, Graphene, Layer by layer, Oxide, Nanotechnology, General Chemistry, Materials Chemistry, Transmittance, OLED, Thin film, Sheet resistance - Abstract
A new approach was developed for the fabrication of reduced graphene oxide (rGO) multilayers that can be used as transparent and conducting thin films. This was achieved by layer-by-layer (LbL) assembly of positively and negatively charged rGO sheets, which provides highly controllable thin films in terms of thickness, transmittance, and sheet resistance. In particular, the thickness of the rGO multilayer thin films could be controlled precisely at the subnanometre scale, in steps of ∼0.46 nm, simply by varying the number of stacking layers. This method therefore enables excellent control over the thickness-related optical and electrical properties of the rGO multilayers. Furthermore, we demonstrated the application of the rGO multilayers in an OLED device.
- Published
- 2011
212. Changes in Housing Tenure Rates and Housing Tenure Propensity in Seoul Metropolitan Area between 1995 and 2005
- Author
-
Dongwoo Kang and SeongWoo Lee
- Subjects
Labour economics, Housing tenure, Business, Metropolitan area - Published
- 2010
213. Autostereoscopic 3D display using directional subpixel rendering
- Author
-
Ju-Young Park, Seok Lee, Yoon-sun Choi, Hyoseok Hwang, Jin-Ho Lee, Jingu Heo, Kyu-hwan Choi, Dongwoo Kang, Byongmin Kang, and Dongkyung Nam
- Subjects
Pixel, Computer science, Image quality, Stereo display, Subpixel rendering, Atomic and Molecular Physics, and Optics, 3D rendering, Optics, Autostereoscopy, Computer vision, Artificial intelligence, Light field, Camera resectioning - Abstract
In this paper we present an autostereoscopic 3D display using a directional subpixel rendering algorithm in which clear left and right images are produced in real time based on a viewer's 3D eye positions. In order to maintain 3D image quality over a wide viewing range, we designed an optical layer that generates a uniformly distributed light field. The proposed 3D rendering method is simple, and the processing for each pixel can be performed independently in parallel computing environments. To prove the effectiveness of our display system, we implemented a 31.5" 3D monitor and a 10.1" 3D tablet prototype, in which the 3D rendering is processed on a GPU and an FPGA board, respectively.
- Published
- 2018
214. An improved anonymous authentication scheme for roaming in ubiquitous networks
- Author
-
Hyoungshick Kim, Jongho Moon, Hakjun Lee, Dongwoo Kang, Dongho Won, Jaewook Jung, and Dong-Hoon Lee
- Subjects
Time Factors, Computer science, Encryption, Social Sciences, Cryptography, Social Networking, Random oracle, Sociology, Data anonymization, Identity Theft, Password, Multidisciplinary, Revocation, Applied Mathematics, Simulation and Modeling, Social Communication, Biometrics, Physical Sciences, Telecommunications, Engineering and Technology, Mobile device, Algorithms, Research Article, Computer and Information Sciences, Equipment, Research and Analysis Methods, Computer security, Computational Techniques, Communication Equipment, Authentication, Communications, Roaming, Mathematics, Cell Phone - Abstract
With the evolution of communication technology and the exponential increase of mobile devices, ubiquitous networking allows people to use their data and computing resources anytime and everywhere. However, numerous security concerns and complicated requirements arise as these ubiquitous networks are deployed throughout people's lives. To meet the challenge, user authentication schemes in ubiquitous networks should ensure the essential security properties for the preservation of privacy with low computational cost. In 2017, Chaudhry et al. proposed a password-based authentication scheme for roaming in ubiquitous networks to enhance security. Unfortunately, we found that their scheme remains insecure in its protection of user privacy. In this paper, we prove that Chaudhry et al.'s scheme is vulnerable to stolen mobile device and user impersonation attacks, and that its drawbacks include the absence of incorrect-login-input detection, the incorrectness of the password change phase, and the absence of a revocation provision. Moreover, we suggest a possible way to fix the security flaw in Chaudhry et al.'s scheme by using biometric-based authentication, in which a bio-hash is applied in the implementation of three-factor authentication. We prove the security of the proposed scheme with the random oracle model, formally verify its security properties using a tool named ProVerif, and analyze it in terms of computational and communication cost. The analysis results show that the proposed scheme is suitable for resource-constrained ubiquitous environments.
- Published
- 2018
215. An OWL-based semantic business process monitoring framework
- Author
-
Sunjae Lee, Kwangsoo Kim, Jae Yeol Lee, and Dongwoo Kang
- Subjects
Process management, Process modeling, Computer science, Artifact-centric business process model, Business process, General Engineering, Process mining, Business process modeling, Computer Science Applications, Business process management, Business process discovery, Business Process Model and Notation, Artificial Intelligence, Ontology, Business activity monitoring, Software engineering - Abstract
The process monitoring phase is one of the phases of the service-oriented business process (SOBP) lifecycle. Traditional process monitoring approaches operate only at the syntactic level of the process monitoring context, which causes communication problems such as ambiguous understanding and divergent interpretations. To solve these problems, process monitoring should be achieved at the semantic level as well as at the syntactic level of the process monitoring context. In order to support semantic monitoring operations, an ontology-based monitoring framework for SOBP execution is suggested in this paper. The suggested framework combines a BPEL4WS process model with a semantic monitoring context expressed in OWL.
- Published
- 2009
216. A novel stylus profiler without nonlinearity and parasitic motion for FPD inspection system
- Author
-
Dongwoo Kang, Moon G. Lee, Minsung Hong, June Ho Park, Hyun Soo Jung, and Soo Hun Lee
- Subjects
Engineering, Liquid-crystal display, Current-feedback operational amplifier, Mechanical Engineering, Linear variable differential transformer, Voice coil, Servomotor, Flat panel display, Display device, Optics, Mechanics of Materials, Stylus - Abstract
There are increasing needs to inspect the micro-patterns of flat panel display (FPD) devices such as PDPs and LCDs. The inspection system should be able to measure over large mother glass with high productivity and accuracy. Stylus profilers are adopted as an inspection system. To scan over large and heavy FPD specimens, a "tip-scanning" head for the stylus profiler is required. A simple way to realize a tip-scanning system is to miniaturize the whole scanning unit. In this study, a novel stylus profiler is proposed as a tip-scanning stylus profiler. The novel stylus profiler has a leaf spring instead of the conventional lever and pivot. To measure the position of the stylus, an optical sensor is used. A linear variable differential transformer is applied to feed back the scanning stage displacement. The stage is actuated by a voice coil motor (VCM). The target performance of the stylus profiler head is a stroke over 20 μm with high accuracy. The specifications of the xy-scanning stage are a range over 250 μm × 250 μm and a bandwidth over 20 Hz. The magnetic and elastic characteristics of the mechanism are designed based on finite element (FE) analysis. After fabrication, the head and stage are integrated. A current amplifier and a feedback controller are also developed. The performance of the stylus profiler is validated by inspecting a standard sample.
- Published
- 2007
217. A framework for supporting bottom-up ontology evolution for discovery and description of Grid services
- Author
-
Kwangsoo Kim, Sunjae Lee, Jae Yeol Lee, Wonchul Seo, and Dongwoo Kang
- Subjects
Ontology Inference Layer, Conceptualization, Computer science, Process ontology, Ontology-based data integration, Interoperability, General Engineering, Suggested Upper Merged Ontology, Ontology language, Ontology (information science), OWL-S, Computer Science Applications, Open Biomedical Ontologies, World Wide Web, Artificial Intelligence, Ontology components, Ontology, Upper ontology, Ontology alignment - Abstract
The problem of service sharing in a Grid environment arises from the heterogeneity of ontologies. The discovery and description of Grid services based on heterogeneous ontologies might give rise to misunderstanding of the contents. In order to reduce and ultimately remove this misunderstanding, a domain-specific ontology should be shared among concerned parties, and the abilities of Grid services should be discovered and described based on that shared ontology. However, since an ontology evolves over time, the shared ontology for Grid services should have a flexible infrastructure with the ability to reflect changes in ontologies. This paper proposes a flexible ontology management approach for the discovery and description of Grid service capabilities supporting ontology evolution, whose goal is to enhance the interoperability among Grid services. In this approach, concepts and descriptions in an ontology are defined independently and connected by relationships. In addition, the relationships are updated based on real-time evaluations by ontology users in order to flexibly support ontology evolution. Bottom-up ontology evolution refers to an environment in which ontology users evaluate the impact factors of concepts in an ontology and the results of the evaluation are reflected in modifications of the ontology. The contribution of this paper is to suggest an ontology management framework that not only enables semantic discovery and description of Grid service capabilities but also supports bottom-up ontology evolution based on users' evaluations.
- Published
- 2007
218. Reversible Jump Markov Chain Monte Carlo for Deconvolution
- Author
-
Davide Verotta and Dongwoo Kang
- Subjects
Mathematical optimization, Posterior probability, Models, Biological, LTI system theory, Cocaine, Humans, Computer Simulation, Pharmacokinetics, Mathematics, Pharmacology, Drug Administration Routes, Uncertainty, Reproducibility of Results, Bayes Theorem, Markov chain Monte Carlo, Reversible-jump Markov chain Monte Carlo, Inverse problem, Markov Chains, Spline (mathematics), Linear Models, Piecewise, Deconvolution, Sulpiride, Monte Carlo Method, Algorithm, Algorithms - Abstract
To solve the problem of estimating an unknown input function to a linear time-invariant system, we propose an adaptive non-parametric method based on reversible jump Markov chain Monte Carlo (RJMCMC). We use piecewise polynomial functions (splines) to represent the input function. The RJMCMC algorithm allows the exploration of a large space of competing models, in our case the collection of splines corresponding to alternative positions of breakpoints, and it is based on the specification of transition probabilities between the models. RJMCMC determines the number and positions of the breakpoints and the coefficients determining the shape of the spline, as well as the corresponding posterior distributions of the breakpoints, the number of breakpoints, the coefficients, and arbitrary statistics of interest associated with the estimation problem. Simulation studies show that the RJMCMC method can obtain accurate reconstructions of complex input functions, and obtains better results than standard non-parametric deconvolution methods. Applications to real data are also reported.
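For background on the framework named above (not reproduced from the paper): in Green's reversible-jump construction, a proposed move between spline models of different dimension, such as a breakpoint birth or death, is accepted with probability

```latex
\alpha = \min\left\{ 1,\;
  \frac{\pi(k',\theta_{k'} \mid y)\, j(k \mid k')\, q'(u')}
       {\pi(k,\theta_{k} \mid y)\, j(k' \mid k)\, q(u)}
  \left| \frac{\partial(\theta_{k'}, u')}{\partial(\theta_{k}, u)} \right|
\right\}
```

where $\pi$ is the posterior over the model index $k$ and parameters $\theta_k$, $j(\cdot \mid \cdot)$ are the move-type probabilities, $u$ and $u'$ are auxiliary variables (with densities $q$, $q'$) that match dimensions between the two models, and the final factor is the Jacobian of the deterministic dimension-matching map.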
- Published
- 2007
219. Software similarity analysis based on dynamic stack usage patterns
- Author
-
Daeyeon Son, Dongwoo Kang, Gwangil Jeon, Jeonghyeok Park, and Jongmoo Choi
- Subjects
Source code, Computer science, Local variable, Software, Similarity (network science), Code refactoring, Malware, Plagiarism detection, Data mining, TRACE (psycholinguistics) - Abstract
Software similarity analysis is actively utilized for various purposes such as plagiarism detection, malware classification, and software refactoring. In this study, we observe that the runtime behavior of the stack, which manipulates arguments and local variables during function calls, is one of the viable indicators to identify similar software and to detect plagiarized programs. This observation drives us to design a novel software similarity analysis scheme that has the following two strong points. First, it examines the stack usage patterns collected during the execution of binary codes, without requiring source codes. Therefore, it can be used for software theft trial cases where the involved companies or individuals are not willing to uncover their source codes. Second, the stack usage patterns are invulnerable to syntactic alterations such as renaming, statement reordering, and control flow restructuring. The proposed scheme provides four functionalities: trace collection, dynamic call sequence graph generation, stack usage pattern refinement, and similarity comparison. Experimental results based on a real implementation show that our proposal identifies similar software appropriately, reporting similarity scores comparable to MOSS (Measure Of Software Similarity). In addition, it has the capability to discover whether binaries use the same core logic.
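The paper's exact pattern representation and comparison metric are not given in this abstract; as a hedged illustration of the general idea, two stack-usage traces (e.g. sequences of stack depths or frame sizes sampled at function calls) could be compared via Jaccard similarity of their n-gram sets, which is insensitive to identifier renaming:

```python
def ngrams(seq, n=3):
    # sliding n-grams over a trace of stack events (depths, frame sizes, ...)
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def trace_similarity(trace_a, trace_b, n=3):
    # Jaccard similarity of the traces' n-gram sets:
    # 1.0 for identical pattern sets, 0.0 for disjoint ones
    a, b = ngrams(trace_a, n), ngrams(trace_b, n)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

A renamed or reordered copy of a program would still produce largely the same dynamic stack n-grams, while an unrelated binary would share few of them.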
- Published
- 2015
220. Amnesic cache management for non-volatile memory
- Author
-
Jongmoo Choi, Sam H. Noh, Dongwoo Kang, Seungjae Baek, Onur Mutlu, and Donghee Lee
- Subjects
Smart Cache, Computer science, Cache coloring, Cache invalidation, CPU cache, Page cache, Cache, Parallel computing, Cache pollution, Cache algorithms - Abstract
One characteristic of non-volatile memory (NVM) is that, even though it supports non-volatility, its retention capability is limited. To handle this issue, previous studies have focused on refreshing or advanced error correction code (ECC). In this paper, we take a different approach that makes use of the limited retention capability to our advantage. Specifically, we employ NVM as a file cache and devise a new scheme called amnesic cache management (ACM). The scheme is motivated by our observation that most data in a cache are evicted within a short time period after they have been entered into the cache, implying that they can be written with the relaxed retention capability. This retention relaxation can enhance the overall cache performance in terms of latency and energy since the data retention capability is proportional to the write latency. In addition, to prevent the retention relaxation from degrading the hit ratio, we estimate the future reference intervals based on the inter-reference gap (IRG) model and manage data adaptively. Experimental results with real-world workloads show that our scheme can reduce write latency by up to 40% (30% on average) and save energy consumption by up to 49% (37% on average) compared with the conventional LRU based cache management scheme.
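As a toy sketch of the idea described above (illustrative only, not the authors' ACM implementation): an LRU cache that estimates each block's inter-reference gap with a logical clock, and flags writes whose data is expected to be touched again soon as candidates for relaxed-retention (faster, lower-energy) NVM writes:

```python
from collections import OrderedDict

class AmnesicCache:
    """Toy model: blocks with a short estimated inter-reference gap (IRG)
    can be written with relaxed retention; a logical clock stands in for
    real time. Names and the threshold are illustrative assumptions."""

    def __init__(self, capacity, relax_threshold=4):
        self.capacity = capacity
        self.relax_threshold = relax_threshold  # max gap for a relaxed write
        self.data = OrderedDict()               # key -> value, in LRU order
        self.last_seen = {}                     # key -> tick of last access
        self.clock = 0                          # logical time

    def access(self, key, value):
        """Insert/update a block; returns True if the write could use
        relaxed retention (the estimated IRG is short)."""
        self.clock += 1
        prev = self.last_seen.get(key)
        irg = self.clock - prev if prev is not None else None
        self.last_seen[key] = self.clock
        # Short estimated IRG -> the block will be referenced (or evicted)
        # soon, so a short-retention write is safe; no history -> strict.
        relaxed = irg is not None and irg <= self.relax_threshold
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)       # evict the LRU block
        return relaxed
```

The paper's IRG-model-based adaptation would replace the fixed threshold with a per-block estimate of the future reference interval.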
- Published
- 2015
221. Cryptanalysis of User Authentication Scheme Preserving Anonymity for Ubiquitous Devices
- Author
-
Dongho Won, Dongwoo Kang, Dong-Hoon Lee, and Jongho Mun
- Subjects
Authentication, Network security, Computer science, Mutual authentication, Computer security, Cellular network, Smart card, Cryptanalysis, Replay attack, Anonymity - Abstract
As mobile network services using cell phones, tablet PCs, and notebooks gradually increase, the smart card has become a useful tool because of its convenience and portability. Smart card-based authentication is now one of the most widely used authentication methods. In 2015, Djellali et al. proposed a user authentication scheme that preserves user anonymity and provides mutual authentication. It also provides a lightweight and efficient mechanism that can easily be applied to devices with limited power or resources. They claimed that their scheme resists many network threats. Unfortunately, we discovered several weaknesses. In this paper, we demonstrate that their scheme is still vulnerable to several network threats, such as the insider attack, offline password guessing attack, impersonation attack, and replay attack.
- Published
- 2015
222. Process of Big Data Analysis Adoption: Defining Big Data as a New IS Innovation and Examining Factors Affecting the Process
- Author
-
Soung Hie Kim and Dongwoo Kang
- Subjects
Knowledge management ,business.industry ,Big data ,Perception model ,business ,Expansive ,Pre and post ,Competence (human resources) ,Structural equation modeling ,Data modeling ,Organizational level - Abstract
This paper defines big data analysis as a type 3 innovation and extends our previous studies on the adoption/assimilation of innovative technologies. The paper develops a three-stage integrative adoption model based on the prior diffusion literature. The model uses the TOE (Technology-Organization-Environment) framework as antecedents of the adoption process. Based on the perception model, we hypothesize how perceived direct/indirect benefit, financial readiness, IS competence, and industrial pressure affect big data analysis adoption at the organizational level. These five factors are tested using SEM (Structural Equation Modeling), and our analysis leads to the following key findings. (1) Financial readiness, IS competence, and industrial pressure are found to significantly affect the adoption stages, but no such relationship was found between perceived direct/indirect benefit and the subsequent stages. (2) IS competence had a broad influence on the overall adoption process. (3) The adoption stage is influenced by external factors, which in our case is industrial pressure. (4) The pre- and post-adoption stages are affected by an organization's internal resources rather than by its environment.
- Published
- 2015
223. An enterprise architecture framework for collaboration of virtual enterprise chains
- Author
-
Heekwon Chae, Dongwoo Kang, Kwangsoo Kim, and Young-Hwan Choi
- Subjects
Enterprise architecture framework ,Knowledge management ,Process management ,Architecture domain ,Business process ,Computer science ,media_common.quotation_subject ,Supply chain ,Enterprise integration ,Enterprise architecture ,Integrated enterprise modeling ,Industrial and Manufacturing Engineering ,Adaptability ,Functional software architecture ,Enterprise system ,Enterprise architecture management ,Enterprise life cycle ,Business architecture ,Enterprise application integration ,Enterprise information system ,View model ,NIST Enterprise Architecture Model ,Enterprise planning system ,media_common ,Enterprise systems engineering ,business.industry ,Mechanical Engineering ,Enterprise information security architecture ,Enterprise modelling ,Computer Science Applications ,Purdue Enterprise Reference Architecture ,Control and Systems Engineering ,business ,Software ,Enterprise software - Abstract
The competitive power in the dynamically changing business environment has moved from individual enterprises to virtual enterprise chains. Collaboration among enterprises in a virtual enterprise chain requires that all enterprise elements interact efficiently with each other within a virtual framework. However, no efficient framework for such collaboration has yet been introduced, owing to the incompatibility problem that stems from the inherent complexity of diverse enterprise elements. This paper introduces a new framework, the Virtual Enterprise Chain Collaboration Framework (VECCF), to guide enterprises in promoting their collaboration efforts by addressing the incompatibility problem of enterprise elements. VECCF focuses on three aspects of enterprise elements that capture different views of the collaboration: business processes, application software, and enabling technologies. In addition, the structure of VECCF conforms to OMG's metamodel architecture so that the enterprise models can be extended to specific value chains when necessary. The enterprise models of a design chain and a supply chain are applied to VECCF and then integrated into a merged value chain to verify the adaptability and extensibility of the proposed framework.
- Published
- 2006
224. ORIGINAL RESEARCH—WOMEN's SEXUAL DYSFUNCTION: Impact of Oral Contraceptives on Sex Hormone‐Binding Globulin and Androgen Levels: A Retrospective Study in Women with Sexual Dysfunction
- Author
-
Gemma Fantini, Dongwoo Kang, Sarah Wise, Irwin Goldstein, André T. Guay, Claudia Panzer, and Ricardo Munarriz
- Subjects
Gynecology ,medicine.medical_specialty ,biology ,medicine.drug_class ,business.industry ,Urology ,Endocrinology, Diabetes and Metabolism ,Female sexual dysfunction ,Physiology ,medicine.disease ,Androgen ,Discontinuation ,Psychiatry and Mental health ,Endocrinology ,Sexual dysfunction ,Sex hormone-binding globulin ,Reproductive Medicine ,medicine ,biology.protein ,medicine.symptom ,Androgen insufficiency ,business ,Testosterone ,Reproductive health - Abstract
Introduction Oral contraceptives (OCs) have been the preferred method of birth control because of their high rate of effectiveness. OC use, however, has been associated with women's sexual health complaints and androgen insufficiency. OC use is associated with a decrease of androgen ovarian synthesis and an increase in the production of sex hormone‐binding globulin (SHBG). There have been limited studies assessing SHBG values after discontinuation of OC use. Aim To retrospectively investigate SHBG levels before and after discontinuation of OC use. Main Outcome Measure Sex hormone‐binding globulin values were compared at baseline, while on the OC, and well beyond the 7‐day half‐life of SHBG at 49–120 (mean 80) days and >120 (mean 196) days after discontinuation of OCs. Methods A total of 124 premenopausal women with sexual health complaints for >6 months met inclusion/exclusion criteria. Three groups of women were defined: (i) “Continued‐Users” (N = 62; mean age 32 years) had been on OCs for >6 months and continued taking them; (ii) “Discontinued‐Users” (N = 39; mean age 33 years) had been on OCs for >6 months and discontinued them; and (iii) “Never‐Users” (N = 23; mean age 36 years) had never taken OCs. Results Sex hormone‐binding globulin values in the “Continued‐Users” were four times higher than those in the “Never‐User” group (mean 157 ± 13 nmol/L vs. 41 ± 4 nmol/L; P 120 days). Conclusion In women with sexual dysfunction, SHBG changes in “Discontinued‐Users” did not decrease to values consistent with “Never‐Users.” Long‐term sexual, metabolic, and mental health consequences might result as a consequence of chronic SHBG elevation. Does prolonged exposure to the synthetic estrogens of OCs induce gene imprinting and increased gene expression of SHBG in the liver in some women? Prospective research is needed. Panzer C, Wise S, Fantini G, Kang D, Munarriz R, Guay A, and Goldstein I. 
Impact of oral contraceptives on sex hormone‐binding globulin and androgen levels: a retrospective study in women with sexual dysfunction. J Sex Med 2006;3:104–113.
- Published
- 2006
225. Population analyses of amlodipine in patients living in the community and patients living in nursing homes
- Author
-
Dongwoo Kang, Janice B. Schwartz, and Davide Verotta
- Subjects
Male ,Aging ,medicine.medical_specialty ,Alcohol Drinking ,Population ,Pharmacokinetics ,Internal medicine ,Ethnicity ,medicine ,Humans ,Pharmacology (medical) ,In patient ,Amlodipine ,education ,Aged ,Aged, 80 and over ,Pharmacology ,Sex Characteristics ,education.field_of_study ,Models, Statistical ,business.industry ,Smoking ,Calcium Channel Blockers ,Nursing Homes ,NONMEM ,Surgery ,Concomitant ,Lean body mass ,Female ,Nursing homes ,business ,Algorithms ,medicine.drug - Abstract
Objective Our objective was to determine the effects of age, sex, and morbidity on the apparent oral clearance (CL/F) of amlodipine. Methods Population pharmacokinetic analyses were performed on data from 211 patients receiving oral racemic amlodipine (dose of 7.2±3.6 mg/d [mean±SD]) on a long-term basis. Of the patients, 105 were men, with a mean age of 72±13 years and lean body weight (LBW) of 60.7±7.6 kg, and 106 were women, with a mean age of 79±11 years and LBW of 44.2±6.0 kg; 119 lived in the community, 20 in assisted living facilities, and 72 in nursing homes. Amlodipine was measured by liquid chromatography-tandem mass spectrometry. Population analyses were performed by use of NONMEM with sex, age, race, living site, alcohol intake, and concomitant medications considered as covariates. The significance of covariates was determined by likelihood ratio tests. Results Female sex and living in a nursing home were associated with a faster CL/F compared with men and community-dwelling patients, respectively. The mean CL/F was 7.83±0.50 mL·min−1·kg−1 (LBW) in women compared with 6.31±1.01 in men and 8.68±1.00 mL·min−1·kg−1 in nursing home residents compared with 6.32±1.17 in community-dwelling patients. Increasing age was associated with decreasing CL/F only in community-dwelling patients and residents of assisted living facilities. Conclusions In middle-aged and very old (>80 years) patients, amlodipine CL/F was faster in women compared with men and was faster in nursing home residents compared with community-dwelling patients, with increasing age decreasing CL/F only in community-dwelling patients and residents of assisted-living facilities. Clinical Pharmacology & Therapeutics (2006) 79, 114–124; doi: 10.1016/j.clpt.2005.09.007
- Published
- 2006
226. A sample size computation method for non-linear mixed effects models with applications to pharmacokinetics models
- Author
-
Janice B. Schwartz, Dongwoo Kang, and Davide Verotta
- Subjects
Statistics and Probability ,Mixed model ,Mathematical optimization ,Nifedipine ,Epidemiology ,Population ,Black People ,Wald test ,Models, Biological ,White People ,Sampling design ,Test statistic ,Humans ,Applied mathematics ,Computer Simulation ,Pharmacokinetics ,education ,Mathematics ,education.field_of_study ,Models, Statistical ,Sampling (statistics) ,Calcium Channel Blockers ,Random effects model ,Sample size determination ,Sample Size ,Monte Carlo Method - Abstract
We propose a simple method to compute sample size for an arbitrary test hypothesis in population pharmacokinetics (PK) studies analysed with non-linear mixed effects models. Sample size procedures exist for linear mixed effects models and have recently been extended by Rochon using the generalized estimating equations of Liang and Zeger. Thus, fully model-based inference in sample size computation has been possible. The method we propose extends this approach using a first-order linearization of the non-linear mixed effects model and the Wald chi-square test statistic. The proposed method is general: it allows an arbitrary non-linear model as well as an arbitrary distribution of the random effects characterizing both inter- and intra-individual variability of the mixed effects model. To illustrate possible uses of the method we present tables of minimum sample sizes, in particular with an illustration of the effect of sampling design on sample size. We demonstrate how (D-)optimal or frequent sampling requires fewer subjects in comparison with a sparse sampling design. We also present results from Monte Carlo simulations showing that the computed sample size can produce the desired power. The proposed method greatly reduces computing times compared with simulation-based methods of estimating sample sizes for population PK studies.
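For the simplest one-parameter case, the linearize-then-Wald-test recipe above reduces to sizing a 1-degree-of-freedom chi-square test. The sketch below (an illustration of that idea, not the authors' code; function names and the variance parametrization are assumptions) iterates n until the noncentral chi-square power reaches the target.

```python
from statistics import NormalDist

_N = NormalDist()

def wald_power(n, delta, var1, alpha=0.05):
    """Power of a 1-df Wald chi-square test of H0: delta = 0, when the
    estimator of delta has variance var1/n (as after a first-order
    linearization of the model)."""
    lam = n * delta ** 2 / var1         # noncentrality parameter
    z = _N.inv_cdf(1 - alpha / 2)       # chi2(1) critical value is z**2
    # P(chi2_1(lam) > z**2) in closed form via the normal distribution
    return (1 - _N.cdf(z - lam ** 0.5)) + _N.cdf(-z - lam ** 0.5)

def wald_sample_size(delta, var1, alpha=0.05, power=0.8):
    """Smallest n whose Wald-test power reaches the target."""
    n = 2
    while wald_power(n, delta, var1, alpha) < power:
        n += 1
    return n
```

With a standardized effect of 0.5, this reproduces the textbook two-sided result of roughly 32 subjects for 80% power at alpha = 0.05.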
- Published
- 2004
227. Population analyses of sustained-release verapamil in patients: Effects of sex, race, and smoking
- Author
-
Mary Ellen Krecic‐Shepard, Nishit B. Modi, Dongwoo Kang, Suneel K. Gupta, Janice B. Schwartz, and Davide Verotta
- Subjects
Adult ,Male ,medicine.medical_specialty ,Vasodilator Agents ,medicine.medical_treatment ,Population ,Racemases and Epimerases ,Administration, Oral ,Coronary Disease ,Gastroenterology ,law.invention ,Coronary artery disease ,Sex Factors ,Pharmacokinetics ,law ,Internal medicine ,Tachycardia, Supraventricular ,Humans ,Medicine ,Pharmacology (medical) ,In patient ,education ,Antihypertensive Agents ,Aged ,Pharmacology ,Likelihood Functions ,Chemotherapy ,education.field_of_study ,Clinical pharmacology ,business.industry ,Racial Groups ,Smoking ,Middle Aged ,medicine.disease ,NONMEM ,Verapamil ,Delayed-Action Preparations ,Anesthesia ,Hypertension ,Female ,business ,Anti-Arrhythmia Agents ,medicine.drug - Abstract
Objective Our objective was to determine the effects of age, sex, and sustained-release formulation on apparent oral clearance of sustained-release racemic verapamil in patient populations. Methods Population pharmacokinetic analyses were performed on data from 186 patients with hypertension, coronary artery disease, or supraventricular arrhythmias who were receiving long-term sustained-release oral racemic verapamil (Covera SR in 105 patients, Calan SR in 67 patients, and other formulations in 14 patients; mean ± SD dose, 280 ± 139 mg) for clinical care or as a part of phase III efficacy studies. Of those 186 patients, 135 were men (age, 63 ± 12 years; ideal body weight, 70.7 ± 6.6 kg) and 51 were women (age, 60 ± 17 years; ideal body weight, 53.7 ± 7.2 kg). Verapamil was measured by HPLC, and population analyses were performed by use of NONMEM software. Sex, age, and formulation were the covariates considered in the population model building. Subgroup analyses of race, smoking, and alcohol consumption were also performed. Significance of covariates was determined by likelihood ratio tests. Results Sex significantly affected steady-state clearance of oral sustained-release racemic verapamil. Apparent oral clearance of sustained-release verapamil was 23.8 ± 2.3 mL/min per kilogram in women compared with 18.6 ± 3.4 mL/min per kilogram in men. Clearance estimates were faster in black subjects compared with white subjects, as well as in smokers compared with nonsmokers. Effects of age, formulation, and alcohol consumption were not detected. Conclusions In middle-aged and older patients, apparent oral clearance of sustained-release racemic verapamil was affected by sex (faster in women compared with men), race (faster in black subjects compared with white subjects), and smoking (faster in smokers compared with nonsmokers) but not by age, alcohol, or formulation. Clinical Pharmacology & Therapeutics (2003) 73, 31–40; doi: 10.1067/mcp.2003.21
- Published
- 2003
228. Effects of dry oxidation treatments on monolayer graphene
- Author
-
Gyeong Hee Ryu, Hye Jin Jo, Dongwoo Kang, Jongyeong Lee, Hyeon Suk Shin, and Zonghoon Lee
- Subjects
Materials science ,Graphene ,Mechanical Engineering ,Electron energy loss spectroscopy ,Analytical chemistry ,Lattice distortion ,02 engineering and technology ,General Chemistry ,010402 general chemistry ,021001 nanoscience & nanotechnology ,Condensed Matter Physics ,Photochemistry ,01 natural sciences ,Monolayer graphene ,Atomic units ,Chemical reaction ,0104 chemical sciences ,law.invention ,Mechanics of Materials ,Transmission electron microscopy ,law ,General Materials Science ,0210 nano-technology ,Surface states - Abstract
Ultraviolet–ozone (UVO) and oxygen plasma are widely used to modify the surface of materials because these processes are facile and accessible. These dry oxidation treatments are also commonly applied to 2D graphene and are presumed to induce similar oxidation effects on the graphene surface. However, in this work, these treatments are revealed to induce the formation of different types of defects on the surface of graphene because the UVO treatment causes a chemical reaction, whereas the oxygen plasma treatment causes both physical and chemical reactions. The oxygen plasma treatment results mainly in topological defects, which effectively induce the attachment of oxygen atoms onto the treated surface; by contrast, the UVO treatment induces only the attachment of oxygen atoms onto the treated surface, without inducing lattice distortion. These results are confirmed mainly by atomic-resolution transmission electron microscopy imaging and electron energy loss spectroscopy. Using such facile dry oxidation treatments, we experimentally modified the surface states of graphene at the atomic scale.
- Published
- 2017
229. Mechanical Properties of Poly(dopamine)-Coated Graphene Oxide and Poly(vinyl alcohol) Composite Fibers Coated with Reduced Graphene Oxide and Their Use for Piezoresistive Sensing
- Author
-
Young-Eun Shin, Hyunhyub Ko, Hye Jin Jo, Hyeon Suk Shin, and Dongwoo Kang
- Subjects
Vinyl alcohol ,Nanocomposite ,Materials science ,Graphene ,Composite number ,Oxide ,02 engineering and technology ,General Chemistry ,010402 general chemistry ,021001 nanoscience & nanotechnology ,Condensed Matter Physics ,01 natural sciences ,0104 chemical sciences ,law.invention ,chemistry.chemical_compound ,chemistry ,Gauge factor ,law ,Ultimate tensile strength ,General Materials Science ,Fiber ,Composite material ,0210 nano-technology - Abstract
High-strength poly(vinyl alcohol) (PVA) composite fibers are successfully fabricated through gel spinning and reinforced with poly(dopamine)-coated graphene oxide (dGO) and an exterior reduced graphene oxide (rGO) coating. The mechanical properties of the PVA/dGO composite fibers depend on the sheet size of the GO and on the interfacial adhesion force formed by the poly(dopamine) layers. The ultimate tensile strength and Young's modulus of the PVA/dGO fibers are 1.58 and 27.2 GPa, respectively, which are 68.1% and 97.1% higher than those of the neat PVA fiber and 8.2% and 21.4% higher than those of the PVA/GO composite fiber. Moreover, exterior rGO layers are shown to reinforce the tensile strength of the PVA/dGO composite fibers: the tensile strength of the rGO-coated PVA/dGO composite fibers is 1.86 GPa. The adhesion force of poly(dopamine) between the GO and the PVA matrix can efficiently transfer the tensile load via strong hydrogen bonding at the interface, and the exterior rGO layers can offer additional tensile strength through the interfacial shear strength between rGO sheets. Additionally, a piezoresistive sensing test of the rGO-coated PVA/dGO fiber shows that a gauge factor of 2.3 is achieved under 1% strain, suggesting the potential use of this material in wearable strain gauges.
- Published
- 2017
230. D3
- Author
-
Myunghoon Oh, Dongwoo Kang, Seungjae Baek, and Jongmoo Choi
- Subjects
business.industry ,Computer science ,Quality of service ,Process (computing) ,Virtualization ,computer.software_genre ,Metadata ,Software ,Virtual machine ,business ,computer ,Computer network ,Data transmission ,Live migration - Abstract
Virtualization, one of the most actively adopted technologies in computer systems today, is steadily widening its range of applications. As multiple virtual machines are executed concurrently on a physical machine, they compete for physical resources such as the CPU, DRAM, HDD, and NIC. In large-scale data centers or cluster systems, it is therefore necessary to balance each physical machine's load by migrating virtual machines between physical machines. Since system performance is significantly hindered during migration, a technique that can reduce the migration overhead is required. Interestingly, we notice that virtual machines' memories contain numerous dispensable data. Therefore, in this paper, we propose an efficient virtual machine migration technique called D3 (Discarding Dispensable Data). The basic idea is simple but effective: find the indispensable page frames and transfer only those. In D3, simple metadata is handed over for the dispensable page frames instead of their actual data, and the discarded page frames are reconstructed appropriately after the migration process. One issue is that checking whether a page frame is dispensable by comparing its entire content incurs excessive overhead. To overcome this problem, we devise a novel sampling-based dispensable page frame detection technique. We have implemented D3 in the Xen virtualization software and evaluated it in a realistic environment with relevant workloads. We show that D3 can efficiently reduce the migration time by up to 64.2% and the size of memory required to complete a migration by up to 94.8%. We also show that D3 is useful for providing a certain level of QoS and better performance.
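The sampling-based detection idea can be illustrated with a toy sketch (the names, probe offsets, and the all-zero notion of "dispensable" are assumptions for illustration, not D3's actual criteria): probe a few sampled bytes first and fall back to a full scan only when every sample passes.

```python
PAGE_SIZE = 4096
SAMPLE_OFFSETS = (0, 509, 1021, 2039, 3067, 4093)  # a few probe points

def is_dispensable(page):
    """Cheap sampling check first; full scan only when samples pass.

    In this sketch a page is 'dispensable' when it is all zeros (e.g. a
    free page); the real technique also covers other reconstructible
    page frames, which this toy model does not attempt.
    """
    if any(page[off] for off in SAMPLE_OFFSETS):
        return False                 # a sampled byte is non-zero: must send
    return not any(page)             # confirm with a full scan

def plan_migration(pages):
    """Split page-frame indices into (send, skip) lists."""
    send, skip = [], []
    for i, page in enumerate(pages):
        (skip if is_dispensable(page) else send).append(i)
    return send, skip
```

The sampling step rejects most non-dispensable pages after a handful of byte reads, so the expensive whole-page scan runs only for pages that are likely dispensable.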
- Published
- 2014
231. Design space exploration of an NVM-based memory hierarchy
- Author
-
Daeyeon Son, Sangyeun Cho, Dongwoo Kang, Jongmoo Choi, and Seungjae Baek
- Subjects
CPU cache ,Cache coloring ,Computer science ,Pipeline burst cache ,Parallel computing ,Cache-oblivious algorithm ,Cache pollution ,Write buffer ,Non-uniform memory access ,Write-once ,Cache invalidation ,Cache algorithms ,Snoopy cache ,Random access memory ,Hardware_MEMORYSTRUCTURES ,Memory hierarchy ,MESI protocol ,Cache-only memory architecture ,Uniform memory access ,Semiconductor memory ,MESIF protocol ,Non-volatile memory ,Smart Cache ,Bus sniffing ,Page cache ,Cache - Abstract
Non-volatile memory (NVM) technologies support both byte addressability (like DRAM) and non-volatility (like disks). This characteristic makes it feasible for NVM to be employed at any layer of the memory hierarchy, including the CPU cache, main memory, file cache, storage, and hybrid memory. In this paper, we explore new challenges and opportunities that arise when NVM is introduced as a file cache in the memory hierarchy. One opportunity is that a cache does not require long-term non-volatility, since data are replaced when working sets change. This feature is well matched with NVM, which has a limited retention time. In addition, the retention time of NVM is inversely proportional to the write latency, giving a chance to optimize write performance. However, the limited retention time raises a new challenge: it may reduce the hit ratio and degrade cache performance. To tackle this challenge, we propose a new inter-reference gap (IRG) based cache management scheme that writes data with different retention times according to their IRGs. Our proposal builds on the fact that block accesses of typical workloads show unique and regular patterns in terms of access intervals. Experimental results show that our scheme enhances system performance by up to 42% (33% on average), compared with the conventional LRU-based cache management scheme.
- Published
- 2014
232. Structured learning algorithm for detection of nonobstructive and obstructive coronary plaque lesions from computed tomography angiography
- Author
-
Ryo Nakazato, Hyunsuk Ko, Debiao Li, Piotr J. Slomka, C.-C. Jay Kuo, Damini Dey, Daniel S. Berman, Reza Arsanjani, and Dongwoo Kang
- Subjects
medicine.medical_specialty ,Receiver operating characteristic ,medicine.diagnostic_test ,business.industry ,Image Processing ,Feature extraction ,Image segmentation ,medicine.disease ,Support vector machine ,Stenosis ,Angiography ,medicine ,Radiology, Nuclear Medicine and imaging ,Radiology ,Structured prediction ,business ,Algorithm ,Computed tomography angiography - Abstract
Visual identification of coronary arterial lesions from three-dimensional coronary computed tomography angiography (CTA) remains challenging. We aimed to develop a robust automated algorithm for computer detection of coronary artery lesions by machine learning techniques. A structured learning technique is proposed to detect all coronary arterial lesions with stenosis [Formula: see text]. Our algorithm consists of two stages: (1) two independent base decisions indicating the existence of lesions in each arterial segment and (2) a final decision made by combining the base decisions. One of the base decisions is a support vector machine (SVM) based learning algorithm, which divides each artery into small volume patches and integrates several quantitative geometric and shape features of arterial lesions in each patch by the SVM algorithm. The other base decision is a formula-based analytic method. The final decision in the second stage applies SVM-based decision fusion to combine the two base decisions from the first stage. The proposed algorithm was applied to 42 CTA patient datasets, acquired with dual-source CT, where 21 datasets had 45 lesions with stenosis [Formula: see text]. Visual identification of lesions with stenosis [Formula: see text] by three expert readers, using consensus reading, was considered the reference standard. Our method performed with high sensitivity (93%), specificity (95%), and accuracy (94%), with a receiver operating characteristic area under the curve of 0.94. The proposed algorithm shows promising results in the automated detection of obstructive and nonobstructive lesions from CTA.
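The two-stage structure of fusing two base decisions can be sketched with stdlib code; here a simple perceptron combiner stands in for the paper's SVM-based fusion, and the function names and sample scores are illustrative assumptions.

```python
def train_fusion(samples, epochs=20, lr=0.1):
    """Learn weights for fusing two base lesion decisions.

    `samples` is a list of ((score1, score2), label) pairs, where the
    scores come from two base detectors (e.g. a learned one and a
    formula-based one) and label is 1 for a true lesion. A perceptron
    is used here as a stdlib stand-in for SVM-based decision fusion.
    """
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (s1, s2), y in samples:
            pred = 1 if w1 * s1 + w2 * s2 + b > 0 else 0
            err = y - pred                 # perceptron update rule
            w1 += lr * err * s1
            w2 += lr * err * s2
            b += lr * err
    return w1, w2, b

def fuse(weights, s1, s2):
    """Final decision: combine the two base scores with learned weights."""
    w1, w2, b = weights
    return 1 if w1 * s1 + w2 * s2 + b > 0 else 0
```

The point of learned fusion, as opposed to a fixed vote, is that the combiner can weight the more reliable base detector more heavily.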
- Published
- 2014
233. Cooperative kernel
- Author
-
Dongwoo Kang, Heekwon Park, and Jongmoo Choi
- Subjects
Flat memory model ,Page fault ,Computer science ,Registered memory ,Overlay ,computer.software_genre ,Execution time ,Configfs ,Interleaved memory ,Computing with Memory ,Memory refresh ,Conventional memory ,Distributed shared memory ,business.industry ,Cache-only memory architecture ,Uniform memory access ,Memory map ,Extended memory ,Memory management ,Physical address ,Shared memory ,Kernel preemption ,Embedded system ,Microsoft Windows ,Operating system ,business ,computer ,Context switch - Abstract
In this paper, we propose a novel memory test platform based on commodity operating systems such as Linux and MS Windows. The proposed memory test platform, named coKernel (cooperative Kernel), provides a facility to examine the entire physical memory, even the region occupied by the test program and the operating system. To accomplish this, we devise several new techniques, including multiple kernel instances, memory isolation, inter-kernel context switch, and kernel hibernation. The multiple kernel instances and memory isolation make it possible to access all physical memory cells, while the inter-kernel context switch and kernel hibernation make it possible to modify memory contents in whatever way a test program desires. Experimental results based on a real implementation have shown that the platform supports full memory test coverage with reasonable overhead. For instance, the inter-kernel context switch and kernel hibernation take less than 7 seconds on a system equipped with 32 GB of DRAM, which is quite small compared with the execution time of a test program.
- Published
- 2014
234. Population pharmacokinetic and pharmacodynamic analysis of tremelimumab in patients with metastatic melanoma
- Author
-
Erjian, Wang, Dongwoo, Kang, Kyun-Seop, Bae, Margaret A, Marshall, Dmitri, Pavlov, and Kourosh, Parivar
- Subjects
Adult ,Aged, 80 and over ,Male ,Skin Neoplasms ,Adolescent ,Antibodies, Monoclonal ,Antineoplastic Agents ,Middle Aged ,Antibodies, Monoclonal, Humanized ,Prognosis ,Models, Biological ,Survival Rate ,Young Adult ,Sex Factors ,Humans ,Female ,Tissue Distribution ,Neoplasm Metastasis ,Melanoma ,Aged ,Proportional Hazards Models - Abstract
Tremelimumab, a fully human monoclonal antibody specific for human cytotoxic T-lymphocyte-associated antigen 4, has been studied in clinical trials. We have reported the results of population pharmacokinetics for tremelimumab in 654 metastatic melanoma patients. Population estimates (inter-individual variability [IIV]) for pharmacokinetic parameters in the final model were clearance (CL), 0.26 L/day (31.8%), and central volume of distribution, 3.97 L (20.4%). CL was faster in males, patients with higher values of creatinine clearance and endogenous immunoglobulin, and patients with relatively poor baseline prognostic factors. No dose adjustment was needed based on the magnitude of the change in CL (30%). The association of CL with overall survival (OS) was investigated. In a Phase 3 trial evaluating tremelimumab as first-line treatment, median OS for the 147 patients in the fast-CL group (≥ median CL value) was 9.6 months versus 15.8 months for the 146 patients in the slow-CL group (< median CL value). Multiple Cox proportional hazard regression models were constructed to evaluate the association between CL and OS, adjusting for the effects of baseline prognostic covariates between the two CL groups. The analysis showed a statistically significant association between CL and OS (P < .05). The results suggest that higher or more frequent dosing should be considered in future trials.
- Published
- 2014
235. Burstiness-aware I/O scheduler for MapReduce framework on virtualized environments
- Author
-
Dongwoo Kang, Junmo Kim, Sewoog Kim, and Jongmoo Choi
- Subjects
business.industry ,Computer science ,Distributed computing ,Cloud computing ,computer.software_genre ,Virtualization ,Bottleneck ,Scheduling (computing) ,Software ,Virtual machine ,Burstiness ,Operating system ,business ,computer ,Context switch - Abstract
Recently, virtualized environments such as cloud computing and virtual clusters have become popular with many MapReduce applications seeking the benefits of low cost and flexibility. However, the I/O bottleneck of the virtualization software imposes a burden, especially when processing big data. To relieve this burden, we propose a novel burstiness-aware I/O scheduler. Our analysis has revealed that the I/O bottleneck is caused by I/O interference among the bursty I/Os triggered by different virtual machines, especially when they execute map and/or reduce tasks. The I/O interference results in frequent context switches in the virtualization software and long seek distances on disk. Our proposed I/O scheduler first detects the I/O burstiness of a virtual machine on-line. Then, it schedules bursty virtual machines in a round-robin fashion so that a scheduled virtual machine utilizes most of the I/O bandwidth without interference. Experiments based on a real implementation have shown that our scheduler can enhance I/O performance by up to 23%, with an average of 20%.
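The detect-then-round-robin policy can be sketched as follows (the window size, threshold, and class names are illustrative assumptions, not the paper's parameters):

```python
from collections import deque

class BurstAwareScheduler:
    """Toy burstiness-aware scheduler over per-VM I/O queues.

    A VM is flagged 'bursty' when its recent window holds more than
    `threshold` I/O requests; bursty VMs are then served first, one at
    a time in round-robin order, so each drains its burst without
    interleaving with the others. (Illustrative sketch only.)
    """

    def __init__(self, window=8, threshold=4):
        self.window = window
        self.threshold = threshold
        self.history = {}        # vm -> deque of recent request counts

    def record(self, vm, n_requests):
        """Record that `vm` issued `n_requests` I/Os in the last tick."""
        h = self.history.setdefault(vm, deque(maxlen=self.window))
        h.append(n_requests)

    def is_bursty(self, vm):
        return sum(self.history.get(vm, ())) > self.threshold

    def schedule(self, vms):
        """Return the service order: bursty VMs first, then the rest."""
        bursty = [v for v in vms if self.is_bursty(v)]
        calm = [v for v in vms if not self.is_bursty(v)]
        return bursty + calm
```

Serving a bursty VM to completion before switching keeps its requests sequential on disk, which is the short-seek, few-context-switches behavior the abstract attributes to the real scheduler.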
- Published
- 2014
236. Laboratory and Clinical Investigations to Identify the Optimal Dosing Strategy for Quizartinib (AC220) Monotherapy in FLT3-Itd-Positive (+) Relapsed/Refractory (R/R) Acute Myeloid Leukemia (AML)
- Author
-
Denise Trone, Mark J. Levis, Guy Gammon, Jorge E. Cortes, Jianke Li, and Dongwoo Kang
- Subjects
business.industry ,Immunology ,QTcF Prolongation ,Cmax ,Cell Biology ,Hematology ,Pharmacology ,medicine.disease ,Biochemistry ,QT interval ,03 medical and health sciences ,chemistry.chemical_compound ,0302 clinical medicine ,Pharmacokinetics ,chemistry ,030220 oncology & carcinogenesis ,Medicine ,Potency ,business ,IC50 ,Febrile neutropenia ,030215 immunology ,Quizartinib - Abstract
Internal tandem duplications (ITD) are the most commonly observed FLT3 mutations in AML and are associated with decreased survival. In order to achieve meaningful clinical responses with FLT3 inhibitors, sustained inhibition of FLT3 must be achieved [Pratz, Blood 2009]. Quizartinib (Q) is an oral, potent, and selective FLT3 inhibitor with an IC50 ≤ 1 nM. In a Phase (Ph) 1 study, the maximum tolerated dose was 200 mg/day (d) in patients (pt) with R/R AML. QTc prolongation was the dose-limiting toxicity (Cortes, JCO 2013). In Ph 2 trials, QTc prolongation >500 msec was observed in 15%-17% of pt treated with Q doses of 90 mg/d and 135 mg/d (Cortes, ASCO 2013; Martinelli, ASCO 2013). Composite complete response (CRc) rates and overall survival (OS) in FLT3-ITD(+) pt were similar across these studies (Levis, EHA 2013). Therefore, we conducted a pre-clinical, pharmacokinetic (PK), and clinical study to find an optimal dosing strategy balancing efficacy with safety for use in future trials. FLT3-inhibitory activity of Q and its major active metabolite with comparable potency, AC886, was determined in a FLT3-ITD(+) cell line incubated in human plasma. The results indicated that the minimum plasma concentration needed to fully inhibit FLT3-ITD in peripheral blood (PB) blasts was 200 nM. Less potent FLT3 inhibitors are able to clear blasts from PB when used at the MTD but often have little effect on bone marrow (BM) blasts. Factors contributing to resistance in the BM microenvironment include local metabolism of drug by stromal cell cytochrome P450 enzymes (Ghiaur, PNAS 2013) and local production of FLT3 ligand (Sato, Blood 2011). To estimate the concentration of Q necessary to inhibit FLT3-ITD in marrow blasts, we conducted concentration-response studies in blasts in suspension culture (as a parallel to PB blasts) versus blasts co-cultured on BM stromal cells. Approximately 5-fold higher levels of Q were needed to achieve the same IC50 in blasts on BM stroma (Figure 1).
These results predict that plasma levels between 500 nM and 1000 nM would be required to reach complete inhibition of leukemic growth in BM. The PK of 30 mg and 60 mg doses of Q was evaluated in a Ph 2 study of 76 pt with FLT3-ITD(+) R/R AML. PK dose-proportionality was observed, and steady state (SS) was reached by Day 15. The SS geometric mean trough concentrations of Q+AC886 were 346 nM and 947 nM for 30 mg and 60 mg, respectively. FLT3 inhibition in peripheral blood blasts, as determined both directly and by a plasma inhibitory activity (PIA) assay, was comparable between the 2 doses. With regard to clinical activity, a trend for longer OS and duration of CRc (DOR) (Figure 2) was seen with the higher dose: median OS 20.9 and 25.4 wk and median DOR 4.1 wk (95% CI 3.9, 9.7) and 20.0 wk (95% CI 4.3, 20) for 30 mg/d and 60 mg/d, respectively. A similar trend was observed when high and low PK exposures were compared. Higher exposure with 60 mg/d did not result in an increased CRc rate (47% at both doses) (Cortes, ASH 2013). In the Ph 2 study, a linear mixed-effects model was applied to Fridericia-corrected QT data (QTcF) and time-matched Q and AC886 plasma concentrations to obtain a slope estimate. The model showed that Q plasma concentration, but not AC886, was the significant predictor of QTc prolongation. The model-predicted upper limit of the 90% confidence interval of QTcF change from baseline at the geometric mean Cmax of 60 mg (869 nM) was 22.6 msec, and at that of 30 mg (332 nM) was 8.64 msec. No apparent dose-response relationship was observed for other safety parameters, including myelosuppression or febrile neutropenia. Thus, neither dose was myelosuppressive, but 60 mg/d was associated with somewhat more QTcF prolongation. This novel laboratory and clinical study has identified 60 mg/d of Q as the target dose for clinical efficacy as monotherapy.
Although differences in DOR and OS were not statistically significant between the 30 mg and 60 mg doses, the sample size was small, and trends for increased benefit were observed at the higher dose. In addition, 60 mg/d provides plasma concentrations (Q+AC886) of 500-1000 nM, which are predicted to achieve more complete inhibition of BM blasts. To mitigate the risk of QTc prolongation, a dosing regimen that includes a 30 mg/d lead-in period until SS is reached to allow for QT assessment, followed by escalation to 60 mg/d if tolerated, has been adopted for the Ph 3 study of Q monotherapy in FLT3-ITD(+) R/R AML (QuANTUM-R, NCT02039726). Disclosures Levis: Millennium: Consultancy, Research Funding; Daiichi-Sankyo: Consultancy, Honoraria; Astellas: Consultancy, Honoraria, Research Funding; Novartis: Consultancy, Honoraria, Research Funding. Cortes: ARIAD: Consultancy, Research Funding; BMS: Consultancy, Research Funding; Novartis: Consultancy, Research Funding; Pfizer: Consultancy, Research Funding; Teva: Research Funding. Gammon: Daiichi Sankyo Pharma Development: Employment. Trone: Daiichi Sankyo Pharma Development: Employment. Kang: Daiichi Sankyo Pharma Development: Employment. Li: Daiichi Sankyo Pharma Development: Employment.
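The concentration-QTc analysis described above regresses time-matched QTcF change on plasma concentration to obtain a slope estimate. A minimal sketch in Python, assuming a simple fixed-effects (ordinary least-squares) fit in place of the study's linear mixed-effects model; per-subject random effects and the 90% CI machinery are omitted, and all numbers are synthetic:

```python
import numpy as np

def qtc_slope_model(conc_nM, dqtcf_ms):
    """Fit dQTcF = intercept + slope * concentration by least squares
    (a simplification of the mixed-effects model in the abstract)."""
    A = np.vstack([np.ones_like(conc_nM), conc_nM]).T
    (intercept, slope), *_ = np.linalg.lstsq(A, dqtcf_ms, rcond=None)
    return intercept, slope

def predicted_dqtcf(intercept, slope, cmax_nM):
    """Model-predicted QTcF change at a given Cmax."""
    return intercept + slope * cmax_nM
```

Given a fitted slope, the predicted QTcF change at the geometric mean Cmax of each dose can then be compared, as the study did for 332 nM vs 869 nM.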
- Published
- 2016
237. A Simple Asynchronous UWB Position Location Algorithm Based on Single Round-Trip Transmission
- Author
-
Yang Suckchel, Yoan Shin, Dongwoo Kang, and Young Namgoong
- Subjects
Computer science ,Applied Mathematics ,Real-time computing ,Ultra-wideband ,Multilateration ,Computer Graphics and Computer-Aided Design ,Synchronization ,Beacon ,Time of arrival ,Transmission (telecommunications) ,Asynchronous communication ,Signal Processing ,Electrical and Electronic Engineering ,Algorithm ,Processing delay - Abstract
We propose a simple asynchronous UWB (Ultra Wide Band) position location algorithm with low complexity, power consumption, and processing delay. In the proposed algorithm, only a single RTTX (Round-Trip Transmission) of UWB pulses is utilized, based on the ToA (Time of Arrival) principle. Hence, the proposed algorithm reduces power consumption and processing delay compared to the basic ToA approach, which requires three RTTXs. Moreover, unlike the TDoA (Time Difference of Arrival) algorithm, the proposed algorithm can perform position location with low complexity, since it does not require strict synchronization among multiple beacons. Simulation results using IEEE 802.15.4a UWB channel models reveal that the proposed algorithm achieves position location performance closely comparable to that of the basic ToA and TDoA algorithms.
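The round-trip ranging idea and a ToA position fix from several beacons can be sketched as follows. This is a generic linearized least-squares solver under our own naming, not the paper's exact formulation:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def rttx_range(t_round_trip_s, t_processing_s):
    """Range from a single round-trip: subtract the responder's known
    processing delay, then halve the remaining time of flight."""
    return C * (t_round_trip_s - t_processing_s) / 2.0

def trilaterate(beacons, ranges):
    """2D position fix from >= 3 beacon ranges by subtracting the first
    range equation from the others, which linearizes the system."""
    (x0, y0), r0 = beacons[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (x, y) estimate
```

The single-RTTX scheme needs only `rttx_range` per beacon plus one solve, whereas the basic ToA approach repeats the ranging exchange three times.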
- Published
- 2008
238. Effect of page frame allocation pattern on bank conflicts in multi-core systems
- Author
-
Heekwon Park, Jongmoo Choi, and Dongwoo Kang
- Subjects
Page frame ,Random allocation ,Sequential allocation ,Multi-core processor ,Memory bank ,Computer science ,business.industry ,Real-time computing ,Frame (networking) ,Isolation (database systems) ,Static memory allocation ,business ,Computer network - Abstract
Multi-core systems are equipped with a large main memory that consists of many memory banks. To fully utilize multi-core capability, memory banks should be accessed by multiple cores in parallel, without the bank conflicts that cause significant performance degradation. In this paper, we investigate how the page frame allocation decisions of operating systems affect bank conflicts. When an operating system allocates page frames that map to the same bank to different cores, accesses to these page frames by those cores incur bank conflicts. In general, the page frame allocation schemes used by operating systems can be classified into two groups according to their allocation patterns, namely sequential and random. Using simulation-based experiments, we analyze the effects of the two patterns on bank conflicts under different numbers of cores and banks. The analysis discloses that sequential allocation performs worse than random allocation due to correlated conflicts and bank congestion. We also discuss several issues, such as bank isolation and individual frame management, that can be used effectively for implementing memory-parallelism-aware page frame allocation schemes in operating systems.
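The effect of the allocation pattern on cross-core bank sharing can be illustrated with a toy model. The modulo frame-to-bank mapping and the sharing metric below are our simplifying assumptions, not the paper's simulator:

```python
def sequential_alloc(num_cores, frames_per_core):
    """Contiguous allocation: core c gets frames [c*n, (c+1)*n)."""
    return {c: list(range(c * frames_per_core, (c + 1) * frames_per_core))
            for c in range(num_cores)}

def shared_banks(allocation, num_banks):
    """Number of banks whose frames belong to more than one core --
    a simple proxy for cross-core bank conflicts."""
    cores_per_bank = [set() for _ in range(num_banks)]
    for core, frames in allocation.items():
        for f in frames:
            # Assumed mapping: consecutive frames interleave across banks.
            cores_per_bank[f % num_banks].add(core)
    return sum(1 for s in cores_per_bank if len(s) > 1)
```

With contiguous per-core regions and an interleaved bank mapping, every core touches every bank, so all banks are shared, which is the correlated-conflict behavior the analysis attributes to sequential allocation.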
- Published
- 2013
239. Two-dimensional hybrid nanosheets of tungsten disulfide and reduced graphene oxide as catalysts for enhanced hydrogen evolution
- Author
-
Dongwoo Kang, Jieun Yang, Seong Joon Ahn, Ahyoung Kim, Manish Chhowalla, Hyeon Suk Shin, and Damien Voiry
- Subjects
Materials science ,Graphene ,Inorganic chemistry ,Tungsten disulfide ,Oxide ,chemistry.chemical_element ,General Chemistry ,General Medicine ,Tungsten ,Electrocatalyst ,Catalysis ,law.invention ,chemistry.chemical_compound ,chemistry ,law ,Hydrothermal synthesis ,Hybrid material ,Graphene oxide paper - Abstract
Composite materials: Tungsten disulfide (WS2) and WS2/reduced graphene oxide (WS2/rGO) nanosheets were fabricated by hydrothermal synthesis using tungsten chloride, thioacetamide, and graphene oxide (GO) as starting materials. The WS2 nanosheets are efficiently templated on the rGO layer. The WS2/rGO hybrid nanosheets show much better electrocatalytic activity for the hydrogen evolution reaction than WS2 nanosheets alone.
- Published
- 2013
240. Mosaic-like monolayer of graphene oxide sheets decorated with tetrabutylammonium ions
- Author
-
Nami Byun, Rodney S. Ruoff, Sung Guk Lee, Byung Hwa Seo, Tae Hyeong Kim, Dongwoo Kang, Dongwook Lee, Jung Woo Kim, and Hyeon Suk Shin
- Subjects
Materials science ,Graphene ,Tetrabutylammonium hydroxide ,Inorganic chemistry ,General Engineering ,Oxide ,General Physics and Astronomy ,Graphite oxide ,Chemical vapor deposition ,law.invention ,chemistry.chemical_compound ,chemistry ,Chemical engineering ,law ,Monolayer ,General Materials Science ,Wafer ,Graphene oxide paper - Abstract
We report the fabrication of mosaic-like monolayers of graphene oxide (G-O) coated with tetrabutylammonium ions (TBA) using a simple spin-coating method. The TBA-coated G-O (TG-O) sheets were prepared by "spontaneous exfoliation" of graphite oxide intercalated with tetrabutylammonium hydroxide (TBAOH) in the wet state, without the need for sonication. Mosaic-like monolayers could be formed on a variety of substrates such as Si wafer (coated with the thin native oxide), SiO2/Si wafer, graphene grown by chemical vapor deposition and then transferred on SiO2/Si wafer, Au film on Si wafer, and Cu foil. The mosaic-like monolayer of TG-O was compared with monolayers of G-O and TG-O prepared using a Langmuir-Blodgett (LB) trough. The formation of the mosaic-like TG-O monolayer films was attributed to (1) weakening of the electrostatic repulsion between G-O sheets by TBA, and (2) prevention of the overlap and stacking of TG-O sheets by disruption of the hydrogen bonding between the basal plane of one sheet and the basal plane or edge of another, by adsorbed TBA. External reflection FTIR spectroscopy showed that spectral features of the mosaic-like monolayer of TG-O made by simple spin-coating were the same as those for the monolayer fabricated using the LB assembly, indicating the same spatial orientations of functional groups. This study provides a very simple route to a complete monolayer of G-O without the need for an LB trough.
- Published
- 2013
241. Automated knowledge-based detection of nonobstructive and obstructive arterial lesions from coronary CT angiography
- Author
-
Dongwoo, Kang, Piotr J, Slomka, Ryo, Nakazato, Reza, Arsanjani, Victor Y, Cheng, James K, Min, Debiao, Li, Daniel S, Berman, C-C Jay, Kuo, and Damini, Dey
- Subjects
Male ,Coronary Stenosis ,Reproducibility of Results ,Middle Aged ,Coronary Angiography ,Sensitivity and Specificity ,Pattern Recognition, Automated ,Radiographic Image Enhancement ,Artificial Intelligence ,Humans ,Radiographic Image Interpretation, Computer-Assisted ,Female ,Tomography, X-Ray Computed ,Algorithms - Abstract
Visual analysis of three-dimensional (3D) coronary computed tomography angiography (CCTA) remains challenging due to the large number of image slices and the tortuous character of the vessels. The authors aimed to develop a robust, automated algorithm for unsupervised computer detection of coronary artery lesions. The authors' knowledge-based algorithm consists of centerline extraction, vessel classification, vessel linearization, lumen segmentation with scan-specific lumen attenuation ranges, and lesion location detection. Presence and location of lesions are identified using a multi-pass algorithm which considers expected or "normal" vessel tapering and luminal stenosis from the segmented vessel. Expected luminal diameter is derived from the scan by automated piecewise least-squares line fitting over the proximal and mid segments (67%) of the coronary artery, considering the locations of the small branches attached to the main coronary arteries. The authors applied this algorithm to 42 CCTA patient datasets, acquired with dual-source CT, where 21 datasets had 45 lesions with stenosis ≥ 25%. The reference standard was provided by visual and quantitative identification of lesions with any stenosis ≥ 25% by three expert readers using consensus reading. The authors' algorithm identified 42 lesions (93%) confirmed by the expert readers. There were 46 additional lesions detected; 23 out of 39 (59%) of these were less-stenosed lesions. When the artery was divided into 15 coronary segments according to standard cardiology reporting guidelines, on a per-segment basis, sensitivity was 93% and specificity was 81% using 10-fold cross-validation. The authors' algorithm shows promising results in the detection of both obstructive and nonobstructive CCTA lesions.
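The expected-diameter step can be sketched as a piecewise least-squares line fit over the lumen diameter profile. This is an illustrative reconstruction: the breakpoint placement (at small-branch locations in the paper) and the stenosis formula are simplified:

```python
import numpy as np

def expected_diameter(pos_mm, diam_mm, breakpoints):
    """Piecewise least-squares line fit of lumen diameter vs. centerline
    position; each segment between breakpoints gets its own line."""
    expected = np.empty_like(diam_mm)
    edges = [pos_mm[0]] + list(breakpoints) + [pos_mm[-1]]
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (pos_mm >= lo) & (pos_mm <= hi)
        a, b = np.polyfit(pos_mm[m], diam_mm[m], 1)  # degree-1 fit
        expected[m] = a * pos_mm[m] + b
    return expected

def stenosis_pct(diam_mm, expected):
    """Diameter stenosis relative to the expected (tapered) lumen."""
    return 100.0 * (1.0 - diam_mm / expected)
```

Candidate lesion locations would then be flagged where `stenosis_pct` exceeds a threshold such as 25%, mirroring the reference standard used in the evaluation.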
- Published
- 2013
242. Onion and pizza
- Author
-
Donghee Lee, Jongmoo Choi, Dongwoo Kang, Sam H. Noh, Namsu Lee, and Sewoog Kim
- Subjects
Scheme (programming language) ,I/O virtualization ,Computer science ,Virtual machine ,Distributed computing ,Disk partitioning ,computer.software_genre ,Virtualization ,computer ,computer.programming_language - Abstract
The traditional disk partitioning scheme commonly used in a virtualization system divides a disk into multiple partitions in a coarse-grained manner, which causes long seek distances when multiple virtual machines run concurrently. To overcome this drawback, we propose two novel schemes, called onion and pizza. The onion scheme makes partitions in an interleaved way, which not only reduces seek distance but also enhances fairness among virtual machines. The pizza scheme goes one step further, making partitions in a vertical fashion rather than a horizontal one, so that requests from different virtual machines can be served in the same cylinder. In addition, a new sector mapping is devised for the efficiency of the two schemes. Experiments based on a real implementation have shown that our proposal can enhance I/O bandwidth by up to 95%, with an average of 25%, compared with the traditional scheme.
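The onion scheme's interleaved layout can be sketched as a sector-mapping function. The stripe granularity and parameter names below are our own; the paper's actual mapping may differ in detail:

```python
def onion_map(partition, offset, num_partitions, stripe_sectors, base=0):
    """Map (partition, logical sector offset) -> physical sector for an
    interleaved 'onion' layout: fixed-size stripes of each partition are
    laid out round-robin, so concurrently active VMs stay close on disk."""
    stripe, within = divmod(offset, stripe_sectors)
    return base + (stripe * num_partitions + partition) * stripe_sectors + within
```

With this mapping, stripe 0 of every partition sits in one contiguous region, stripe 1 in the next, and so on, which is what bounds the seek distance between VMs compared to coarse-grained contiguous partitions.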
- Published
- 2013
243. Image denoising of low-radiation dose coronary CT angiography by an adaptive block-matching 3D algorithm
- Author
-
Jonghye Woo, Ryo Nakazato, Daniel S. Berman, C.-C. Jay Kuo, Piotr J. Slomka, Damini Dey, and Dongwoo Kang
- Subjects
Matching (graph theory) ,medicine.diagnostic_test ,Computer science ,Noise reduction ,Angiography ,Image noise ,medicine ,Low dose ct ,Coronary ct angiography ,Image denoising ,Algorithm ,Block (data storage) - Abstract
Our aim in this study was to optimize and validate an adaptive denoising algorithm based on Block-Matching 3D, for reducing image noise and improving assessment of left ventricular function from low-radiation dose coronary CTA. In this paper, we describe the denoising algorithm and its validation, with low-radiation dose coronary CTA datasets from 7 consecutive patients. We validated the algorithm using a novel method, with the myocardial mass from the low-noise cardiac phase as a reference standard, and objective measurement of image noise. After denoising, the myocardial masses were not statistically different by comparison of individual datapoints with Student's t-test (130.9±31.3 g in the low-noise 70% phase vs 142.1±48.8 g in the denoised 40% phase, p = 0.23). Image noise improved significantly between the 40% phase and the denoised 40% phase by Student's t-test, both in the blood pool (p
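The grouping step that gives Block-Matching 3D its name can be sketched as below; the subsequent collaborative 3D transform-domain filtering, and the adaptive extensions this paper describes, are omitted:

```python
import numpy as np

def match_blocks(img, ref_yx, block=8, search=16, max_blocks=8):
    """Find the patches most similar to the reference patch (by L2
    distance) inside a search window -- the block-matching step that
    BM3D uses to build a 3D group before collaborative filtering."""
    ry, rx = ref_yx
    ref = img[ry:ry + block, rx:rx + block]
    cands = []
    for y in range(max(0, ry - search), min(img.shape[0] - block, ry + search) + 1):
        for x in range(max(0, rx - search), min(img.shape[1] - block, rx + search) + 1):
            patch = img[y:y + block, x:x + block]
            cands.append((float(((patch - ref) ** 2).sum()), y, x))
    cands.sort(key=lambda t: t[0])
    return [(y, x) for _, y, x in cands[:max_blocks]]
```

In full BM3D the matched patches are stacked into a 3D array, transformed, thresholded (or Wiener-filtered), inverted, and aggregated back with weights; only the matching stage is shown here.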
- Published
- 2013
244. Defect-Free, Highly Uniform Washable Transparent Electrodes Induced by Selective Light Irradiation.
- Author
-
Zhaoyang Zhong, Kyoohee Woo, Inhyuk Kim, Hyuntae Kim, Pyeongsam Ko, Dongwoo Kang, Sin Kwon, Hyunchang Kim, Hongseok Youn, and Jooho Moon
- Published
- 2018
245. Hard Spherocylinders of Two Different Lengths as a Model System of a Nematic Liquid Crystal on an Anisotropic Substrate
- Author
-
Osamu Haba, Yosuke Hyodo, Tomonori Koda, Yuichi Momoi, Dongwoo Kang, Koichiro Yonetake, Akihiro Nishioka, Youngseok Choi, and Musun Kwak
- Subjects
Anisotropic substrate ,Materials science ,Condensed matter physics ,business.industry ,Monte Carlo method ,General Physics and Astronomy ,02 engineering and technology ,Substrate (printing) ,021001 nanoscience & nanotechnology ,01 natural sciences ,Rod ,Condensed Matter::Soft Condensed Matter ,Optics ,Mean field theory ,Liquid crystal ,0103 physical sciences ,Molecule ,010306 general physics ,0210 nano-technology ,business ,Anisotropy - Abstract
In this article, we describe the effects of an anisotropic substrate on the alignment of a nematic liquid crystal. We examine how the substrate affects the alignment of a nematic liquid crystal by Monte Carlo simulation. The liquid crystal on a substrate was described by the phase separation of liquid crystal molecules and substrate molecules, both of which were modeled by hard particles. We used hard rods to represent both the liquid crystal and the substrate. The length of the hard rods representing the substrate was adjusted to represent the degree of substrate anisotropy. The results show that the nematic alignment could either be reinforced or weakened, depending on the length of the substrate rods. Mean field theory is used to analyze the simulation results. We confirmed that the distance over which the substrate affects the bulk liquid crystal is about 3 nm for the present hard-rod-based model.
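The degree of nematic alignment that such simulations probe is conventionally quantified by the second-Legendre order parameter; a minimal helper (the standard definition, not code from the paper):

```python
import numpy as np

def nematic_order(thetas):
    """Nematic order parameter S = <(3 cos^2 theta - 1) / 2> for rod
    angles theta measured from the director: S = 1 for perfect
    alignment, S = 0 for an isotropic ensemble."""
    c2 = np.cos(thetas) ** 2
    return float(np.mean((3.0 * c2 - 1.0) / 2.0))
```

Measuring S as a function of distance from the substrate is one way to extract the ~3 nm influence range reported above.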
- Published
- 2016
246. Erratum: API-Based Software Birthmarking Method Using Fuzzy Hashing [IEICE Transactions on Information and Systems Vol.E99.D (2016) , No.7 pp.1836-1851]
- Author
-
Donghoon LEE, Dongwoo KANG, Younsung CHOI, Jiye KIM, and Dongho WON
- Subjects
Artificial Intelligence ,Hardware and Architecture ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Software - Published
- 2016
247. Fine-grained I/O fairness analysis in virtualized environments
- Author
-
Jongmoo Choi, Dongwoo Kang, and Sewoog Kim
- Subjects
Exploit ,Virtual machine ,Computer science ,Distributed computing ,Component-based software engineering ,Fairness measure ,Reservation ,Hypervisor ,computer.software_genre ,Virtualization ,computer ,Scheduling (computing) - Abstract
A virtualized system consists of a couple of independent and isolated software components, such as the hypervisor, control domain, and virtual machines, which causes I/O requests to be processed across several protection domains. These complicated interdomain interactions frequently result in I/O performance degradation and fairness violations. In this paper, we propose a new virtualization-aware, fine-grained I/O analysis framework and observe I/O behavior from a multimodal and multidimensional viewpoint. Our observations reveal that fairness is not preserved at various layers due to a combination of diverse causes, including I/O scheduling and CPU scheduling. Also, some mechanisms that worked well in a non-virtualized system do not match well with a virtualized system, which contributes to the unfairness. These observations prompted us to devise a simple fairness enforcement technique that exploits the reservation and limit concepts to balance the I/O bandwidth shares among virtual machines. Experimental results based on a real implementation have shown that our proposal can enhance I/O fairness without considerable overhead.
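One common way to summarize per-VM bandwidth fairness in such an analysis is Jain's fairness index (a standard metric; the paper's own fairness measure may differ):

```python
def jain_fairness(bandwidths):
    """Jain's fairness index over per-VM I/O bandwidths:
    1.0 means perfectly fair shares, 1/n means one VM gets everything."""
    n = len(bandwidths)
    s = sum(bandwidths)
    return (s * s) / (n * sum(b * b for b in bandwidths))
```

Tracking this index per layer (guest, hypervisor, device) localizes where fairness is lost, which is the kind of cross-layer observation the framework is built for.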
- Published
- 2012
248. Reversibly light-modulated dirac point of graphene functionalized with spiropyran
- Author
-
Byeong Su Kim, Gwangwoo Kim, Hyeon Suk Shin, A-Rang Jang, Dongwoo Kang, Eun Kyung Jeon, and Dae Joon Kang
- Subjects
Nanostructure ,Materials science ,Indoles ,Light ,Transistors, Electronic ,General Physics and Astronomy ,Photochemistry ,law.invention ,chemistry.chemical_compound ,Photochromism ,law ,Materials Testing ,Molecule ,Scattering, Radiation ,General Materials Science ,Merocyanine ,Benzopyrans ,Particle Size ,Spiropyran ,business.industry ,Graphene ,Doping ,General Engineering ,Equipment Design ,Nitro Compounds ,Nanostructures ,Equipment Failure Analysis ,Refractometry ,chemistry ,Optoelectronics ,Graphite ,business ,Visible spectrum - Abstract
Graphene has been functionalized with spiropyran (SP), a well-known photochromic molecule. This was realized with pyrene-modified SP, which adsorbs on graphene through π-π interaction between pyrene and graphene. The field-effect transistor (FET) with SP-functionalized graphene exhibited an n-doping effect and interesting optoelectronic behaviors. The Dirac point of graphene in the FET could be controlled by light modulation, because spiropyran can be reversibly switched between two different conformations, a neutral form (colorless SP) and a charge-separated form (purple-colored merocyanine, MC), upon UV and visible light irradiation. The MC form is produced during UV light irradiation, inducing a shift of the Dirac point of graphene toward negative gate voltage. The reverse process back to the neutral SP form occurred under visible light irradiation or in darkness, inducing a shift of the Dirac point toward positive gate voltage. The change of the Dirac point by UV and visible light was reproducibly repeated. SP molecules also improved the conductance change in the FET device. Furthermore, the dynamics of the conversion from MC to SP on graphene differed from those in solution, in solid samples with SP-grafted polymer, and on gold nanoparticles.
- Published
- 2012
249. Highly efficient polymer light-emitting diodes using graphene oxide as a hole transport layer
- Author
-
Jin Young Kim, Seo-Jin Ko, Chang-Lyoul Lee, Hyunjung Lee, Hyeon Suk Shin, Dongwook Lee, Bo Ram Lee, Myoung Hoon Song, Dongwoo Kang, and Jungwoo Kim
- Subjects
Materials science ,business.industry ,Graphene ,Exciton ,General Engineering ,General Physics and Astronomy ,Anode ,Active layer ,law.invention ,chemistry.chemical_compound ,chemistry ,PEDOT:PSS ,law ,Optoelectronics ,General Materials Science ,business ,Luminous efficacy ,Layer (electronics) ,Poly(3,4-ethylenedioxythiophene) - Abstract
We present an investigation of polymer light-emitting diodes (PLEDs) with a solution-processable graphene oxide (GO) interlayer. The GO layer, with its wide band gap, blocks electron transport from the emissive polymer to the ITO anode while reducing exciton quenching between the GO and the active layer, in place of poly(styrenesulfonate)-doped poly(3,4-ethylenedioxythiophene) (PEDOT:PSS). This GO interlayer maximizes hole-electron recombination within the emissive layer, ultimately enhancing device performance and efficiency in PLEDs. It was found that the thickness of the GO layer is an important factor in device performance. PLEDs with a 4.3 nm thick GO interlayer are superior both to those with PEDOT:PSS layers and to those with rGO, showing a maximum luminance of 39,000 cd/m², a maximum luminous efficiency of 19.1 cd/A (at 6.8 V), and a maximum power efficiency as high as 11.0 lm/W (at 4.4 V). This indicates that PLEDs with a GO layer show a 220% increase in luminous efficiency and a 280% increase in power conversion efficiency compared to PLEDs with PEDOT:PSS.
- Published
- 2012
250. Automatic detection of significant and subtle arterial lesions from coronary CT angiography
- Author
-
Victor Y. Cheng, Ryo Nakazato, C.-C. Jay Kuo, Debiao Li, Piotr J. Slomka, Dongwoo Kang, Daniel S. Berman, James K. Min, and Damini Dey
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Lumen (anatomy) ,Coronary ct angiography ,medicine.disease ,Coronary arteries ,Lesion ,Stenosis ,medicine.anatomical_structure ,Automated algorithm ,Angiography ,medicine ,Radiology ,medicine.symptom ,business ,Artery - Abstract
Visual analysis of three-dimensional (3D) Coronary Computed Tomography Angiography (CCTA) remains challenging due to the large number of image slices and the tortuous character of the vessels. We aimed to develop an automated algorithm for detection of significant and subtle coronary artery lesions that is accurate compared to expert interpretation. Our knowledge-based automated algorithm consists of centerline extraction, which also classifies the 3 main coronary arteries and the small branches of each main coronary artery, vessel linearization, lumen segmentation with scan-specific lumen attenuation ranges, and lesion location detection. Presence and location of lesions are identified using a multi-pass algorithm which considers expected or "normal" vessel tapering and luminal stenosis from the segmented vessel. Expected luminal diameter is derived from the scan by automated piecewise least-squares line fitting over the proximal and mid segments (67%) of the coronary artery, considering small branch locations. We applied this algorithm to 21 CCTA patient datasets, acquired with dual-source CT, where 7 datasets had 17 lesions with stenosis greater than or equal to 25%. The reference standard was provided by visual and quantitative identification of lesions with any ≥25% stenosis by an experienced expert reader. Our algorithm identified 16 out of the 17 lesions confirmed by the expert. There were 16 additional lesions detected (average 0.13/segment); 6 out of 16 of these were actual lesions with
- Published
- 2012