68 results for "Shahzad A"
Search Results
2. Impact of central bank decisions and communications on sentiment, uncertainty, risk aversion and investment behaviour
- Author
-
Shaikh, Raja Shahzad
- Abstract
Central banks' policy decisions and communications influence financial markets by managing investor expectations about the current and future economic scenario in order to achieve desired macroeconomic goals. This thesis empirically evaluates the role of signals given in the central bank's actions and communication in driving investor sentiment, formulating the expected risk premium and shifting investment behaviour in financial markets. The thesis comprises three empirical chapters focusing on the response of market participants to the central bank's quantitative and qualitative announcements. Chapter 2 investigates the impact of United States (US) and domestic monetary policy announcements on consumer and managers' confidence in the United Kingdom (UK) and 10 countries within the euro area during conventional and unconventional policy times. More specifically, using the confidence indicators of the European Commission, the study examines the response of consumers and managers to monetary policy surprises around the global financial crisis. The findings confirm that during the conventional policy period, a domestic expansionary shock has a significant positive impact on consumer and manager confidence in the UK and across the ten euro-area countries. Furthermore, US conventional monetary policy has a greater impact on managers' sentiment than domestic policy. However, after the introduction of the unconventional policy programme, monetary announcements become less effective in boosting the confidence of households and businesses. Chapter 3 analyses the influence of the Federal Reserve's (Fed's) communications on investors' risk perception and appetite in global equity markets. The results suggest that the Fed's optimism (pessimism) decreases (increases) market-wide uncertainty and investors' risk aversion not only in the US but also in the UK and the euro area. In addition, investors respond more strongly to the signals embedded in the communications during recessionary and uncertain times. Moreover, after estimating unique topics and their relative tone from the Fed's communications, this chapter finds that investors pay particular attention to discussion of financial markets, credit conditions, employment and economic growth in forming their response. Finally, investors react heterogeneously to discussion of an improving economic outlook and future contractionary policy. Chapter 4 investigates the effect of the Fed's communications on returns and traders' positions in commodity markets. Using computational linguistic analysis, this study extracts the policymakers' indication of the future path of the policy rate. It documents that the degree of hawkishness in the Fed's communications decreases one-month-ahead returns on metal, energy and overall commodity indexes. In addition, the Fed's hawkish tone increases (decreases) commodity traders' speculating (hedging) positions. This implies that the central bank's tone contains information about economic conditions and provides signals about the future path of policy, which drive traders' positions and affect commodity returns. Furthermore, a topic-modelling analysis of the central bank's communications reveals that hawkish discussion of consumption, financial markets and inflation plays a particularly important role in influencing commodity returns and traders' positions.
- Published
- 2021
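The tone-extraction step this abstract describes can be illustrated with a minimal dictionary-based sketch. The word lists and scoring rule below are illustrative assumptions for exposition, not the thesis's actual lexicon or method:

```python
# Illustrative sketch (not the thesis's dictionary): score the net
# hawkishness of a central bank statement by counting tone words.
HAWKISH = {"tighten", "inflation", "overheating", "raise", "restrictive"}
DOVISH = {"accommodative", "easing", "stimulus", "lower", "slack"}

def net_tone(text: str) -> float:
    """Return net tone in [-1, 1]: positive = hawkish, negative = dovish."""
    words = text.lower().split()
    h = sum(w.strip(".,;") in HAWKISH for w in words)
    d = sum(w.strip(".,;") in DOVISH for w in words)
    return 0.0 if h + d == 0 else (h - d) / (h + d)

print(net_tone("The committee may tighten policy as inflation risks rise."))
```

A per-statement score of this kind can then be related to subsequent returns or positioning data, which is the spirit of the analyses described above.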
3. The utility of counterinsurgency in Balochistan (2013-2019) by the Pakistani Security Forces for achieving the safety of the China Pakistan Economic Corridor (CPEC)
- Author
-
Siddiqui, Khurram Shahzad
- Subjects
DS Asia - Abstract
This study examines the utility of the Pakistani Army's Counterinsurgency (COIN) Strategy (2013-2019) in the eradication of perceived threats facing the China-Pakistan Economic Corridor (CPEC) in Balochistan during the ongoing fifth round of insurgency, which started in 2006. The year 2013 is a landmark because an MoU for CPEC was signed between Pakistan and China, the same year in which the Pakistani Army first promulgated its new counterinsurgency doctrine. This study analyses the institutional learning process of the Pakistani Army, which ultimately resulted in the promulgation of the COIN doctrine, and the extent to which the Army adheres to this doctrinal approach in Balochistan. It empirically investigates the efficacy of the COIN strategy in Balochistan after 2013 concerning CPEC security by using David Kilcullen's 'three pillars of counterinsurgency model' as the conceptual framework. The thesis argues that the COIN approach in Balochistan significantly changed after the doctrine was conceptualised, especially from 2016 onwards, from 'butcher and bolt' to the inclusion of critical components like political primacy, affect-based and focused use of force, winning 'hearts and minds' and rules of engagement. As a result, there was a marked reduction in violence and fatalities and an increase in insurgent surrenders. This thesis concludes that the Pakistani Army has largely controlled the insurgency in Balochistan, but at the same time, a reduction of the tangible support reaching the insurgents through the porous borders and an effective strategy to break the nexus of the Islamic State of Khorasan (ISK) in Balochistan are urgently required to end the insurgency and ensure CPEC's security.
- Published
- 2020
4. An investigation into Human Resource Development (HRD) needs of nurses : the case of public health sector, Pakistan
- Author
-
Shahzad, Rana U.
- Subjects
Training and development ,Social skills ,Nurses ,Nursing ,Continuous professional development ,Training evaluation ,Human Resource Development (HRD) ,Pakistan - Abstract
This research investigates the health services of Pakistan by exploring current Human Resource Development (HRD) practices and social skills training opportunities for the development of nursing staff. The research aims to explore best practice in social skills and competency development through HRD activities by detailing a project to identify the learning needs of registered nurses, leading to improved quality care services. An exploratory research approach has been adopted to achieve the research objectives. This mixed-methods research is primarily a quantitative case study, supplemented by qualitative interviews to validate and enrich the findings from the questionnaires. The data were collected through 600 questionnaires and 10 interviews from five major public hospitals in Lahore, Pakistan. The research identified multiple and diverse challenges: an inadequate and improper HRD infrastructure, together with a lack of transformational leadership and a participative style of management, is resulting in degenerating attitudes and negative behaviours, causing a further decline. These counterproductive elements fail to instil positive social skills and abilities in nursing staff, creating impediments to the delivery of quality care services. This clearly indicates that there is no policy in place. Therefore, based on empirical evidence as well as a critical review of the literature, the thesis proposes a model for achieving critical social skills development through training and development in order to achieve quality care standards, based on a broad, long-term input-process-output-outcome strategy to support the nursing sector, and social skills development in particular, to achieve optimum quality care objectives.
- Published
- 2020
5. Pruned hierarchical local model networks for nonlinear system identification : neuro-fuzzy local model network-based nonlinear system identification using maximum likelihood partitioned hierarchical model trees and backward elimination pruning for structure optimisation
- Author
-
Shahzad, Aitshaam
- Subjects
003 ,System Identification ,Nonlinear Systems ,Neuro-Fuzzy Models ,Local Model Networks ,Tree-Structured Networks ,Time Series Analysis ,Information Theory ,Static and Dynamic Systems - Abstract
Mathematical models form the basis of applications in a multitude of processes and disciplines. With the recent general trend of increased system complexity, through added dimensionality and new innovations in technology, conventional characterisation methods fall short in many key areas. Consequently, it is necessary to address these shortcomings through the development of modelling methods which allow the characterisation and investigation of the features of these systems. Local Model Networks (LMNs), a subset of the neuro-fuzzy modelling method, have become increasingly popular as a solution to this problem of nonlinear system identification. This thesis introduces a novel procedure for the identification of such systems, which returns a pruned hierarchical network. Referred to herein as PRUHINET, the algorithm operates using hierarchical tree construction and returns a structure composed of neurons containing local models, each with an associated region of activation. The operational input space of the system is partitioned using an axis-oblique strategy; however, unlike previous deployments, the employed partition method is predicated upon Maximum Likelihood Estimation (MLE). Analytical gradients are used to speed up the required nonlinear optimisation process. PRUHINET proposes various LMNs of varying complexity levels and utilises an Information Theoretic Criterion (ITC) for the determination of the optimal network structure. This is addressed through the termination of the model build and the removal of redundant neurons via a Backward Elimination Pruning (BEP) approach. Multi-Model Inference (MMI) is used across the candidate LMNs to further mitigate model selection uncertainty and provide the final response prediction. PRUHINET also allows for the consideration of systematic correlations within the supplied dataset by whitening the model errors through an iterated Feasible Generalised Least Squares (FGLS) approach external to the LMN build. The utility of the approach is shown through the identification of various example datasets consisting of static and dynamic elements. The static simulation results illustrate the functionality of PRUHINET as an evolution of traditional approaches, providing superior performance and also returning results representative of classical methods under certain configurational assumptions. Validation results for the dynamic dataset showed the approach was able to identify the given system to an accuracy greater than 95% in all cases. Finally, the PRUHINET approach was shown to allow scrutiny of the identified local models, which is of benefit in application.
- Published
- 2020
- Full Text
- View/download PDF
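The local model network structure summarised in the abstract above can be sketched in a few lines. The Gaussian validity functions, partition centres and local affine models below are toy assumptions, not PRUHINET's fitted structure (which uses axis-oblique, MLE-based partitions):

```python
# A minimal local model network (LMN) prediction sketch: blend local
# affine models with normalised Gaussian validity functions.
import numpy as np

centres = np.array([-1.0, 0.0, 1.0])      # partition centres (illustrative)
widths = np.array([0.5, 0.5, 0.5])        # validity-function widths
theta = np.array([[0.2, -1.0],            # local affine models: [slope, offset]
                  [1.5, 0.0],
                  [0.2, 1.0]])

def lmn_predict(x: float) -> float:
    """Weight each local model by its normalised validity at x."""
    phi = np.exp(-0.5 * ((x - centres) / widths) ** 2)
    phi /= phi.sum()                      # validities sum to 1 across neurons
    local_out = theta[:, 0] * x + theta[:, 1]
    return float(phi @ local_out)

print(lmn_predict(0.3))
```

Structure optimisation in the thesis then amounts to choosing how many such neurons to keep, here via backward elimination guided by an information criterion.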
6. Investigating alignment between pedagogic policy and practice : an English language programme evaluation at secondary level in Pakistan
- Author
-
Karim, Shahzad and Harwood, Nigel
- Subjects
428.0071 - Abstract
This study explores the level of alignment between pedagogy in policy (the pedagogical practices stipulated in the national curriculum for the English language) and pedagogy in practice (the pedagogical practices embodied in the textbooks and enacted by teachers in the classroom) with regard to English language education (ELE) at the secondary level (grades 9-10) in government schools in Punjab, Pakistan. The study is designed against the backdrop of the ELE reforms that formed part of the larger Education Sector Reforms programme introduced in 2001-2005 in Pakistan. Under the ELE reforms, a new curriculum for the English language was introduced that advocated a new pedagogical policy for English language teaching, and new English language textbooks were developed for primary to secondary levels which aimed to align with the pedagogy espoused in the national curriculum. The study consisted of: i) a qualitative content analysis of the national curriculum for the English language to determine its pedagogical policy; ii) an analysis of the secondary level English language textbooks to determine their pedagogical practices and these practices' alignment with those stipulated in the national curriculum; iii) observations of 12 teachers' English language lessons to examine their compliance with the pedagogical practices mandated by the national curriculum; and iv) post-observation interviews with the teachers to inquire into their rationale for the pedagogical practices they used in their lessons. The findings reveal that the national curriculum recommends a suite of 15 pedagogical principles which mainly emphasise the use of a communicative, learner-centred, and inductive pedagogy. The textbook analysis reveals that the textbooks partially comply with the stipulated pedagogical policy, embodying wholly or partially nine of the principles espoused in the national curriculum. The findings from the classroom observations reveal teachers' low level of compliance (29%) with the recommended pedagogical policy. Some of the main reasons for this are examination, institutional, and social constraints.
- Published
- 2020
7. Signalling quality : an assessment of the effectiveness of regulator quality ratings for care homes
- Author
-
Shahzad, Muhammad W.
- Subjects
362.11 ,Care Homes ,Ratings - Abstract
When making a choice regarding care homes, potential service users or their family members face problems associated with information asymmetry. Given the credence nature of care home services, many aspects of quality cannot be assessed prior to use. In the absence of a mechanism to assess service quality, two main problems can arise. First, users are not able to distinguish between good-quality providers and poor-quality providers; second, providers of good-quality services are unable to credibly provide accurate information to users that distinguishes them from poor-quality providers. In such a situation, both users and providers of high-quality services will welcome measures or effective 'signals' that would allow them to differentiate service quality. In England, this role is fulfilled by the Care Quality Commission, through inspections and the provision of quality ratings. Quality ratings produced by the regulator have a dual purpose: they provide information about quality for potential stakeholders who may be interested in using the service, and they provide feedback for care home service providers to encourage service quality improvement. If quality ratings are to effectively remove problems of information asymmetry, they must be effective in both purposes. Using signalling theory as the basis for a range of quantitative methods, this research assesses the effectiveness of the quality ratings provided by the regulator for all care homes as market signals of quality. Based on an assessment of the three dimensions of an effective signal (signal cost, signal observability and signal consistency), the study finds that regulator quality ratings may not be an effective market signal. Whilst the study finds that the quality ratings of some service providers improve after inspection, a greater number of care homes make no improvement. Furthermore, the study finds a positive association between quality ratings and demand for care home places; however, the evidence also suggests that changes in demand are caused mostly by the regulator's own enforcement actions. In addition to some issues with internal consistency between quality ratings and inspection reports, the evidence suggests that regulator quality ratings are mainly fulfilling the role of a feedback signal for regulatory scrutiny. There is a need for the regulator to consider the informational gain from its quality ratings for potential service users of care homes. If service users are to benefit from quality ratings and inspection reports, these must be internally consistent and easily understandable by those accessing the information.
- Published
- 2020
- Full Text
- View/download PDF
8. Diet and risk of acute myocardial infarction in Bangladesh : the Bangladesh Risk of Acute Vascular Events (BRAVE) study
- Author
-
Shahzad, Sara and Chowdhury, Rajiv
- Subjects
616.1 ,Diet ,Cardiovascular disease ,South Asians - Abstract
Background: Coronary Heart Disease (CHD), with myocardial infarction (MI) as its main manifestation, is increasing at an alarming rate in South Asian countries; however, evidence on its determinants is sparse. Dietary risk explains about one-third of global mortality and is one of the most important modifiable risk factors for CHD. Although there is extensive evidence on diet and risk of CHD from western populations, this cannot be generalised to South Asian populations, whose dietary habits are very diverse. Objectives: The main aims of this thesis are to (1) summarise existing epidemiological evidence on diet and risk of CHD in South Asians; (2) characterise in detail the lifestyle, socio-demographic and other correlates of dietary factors in a South Asian population; (3) investigate the association of dietary food groups, patterns and nutrients with the risk of MI; and (4) discuss the public health implications of the findings. Methods: BRAVE is a hospital-based case-control study from Dhaka, Bangladesh, with about 8000 cases and 8000 controls frequency-matched by age and sex. The study has overlapping data on lifestyle (including dietary determinants), biochemical, genetic and environmental risk factors for acute MI (AMI). Using data from this study, dietary determinants of AMI were investigated through (1) cross-sectional analyses of the association of diet with various correlates and (2) case-control analyses of the risk of MI. Results: The systematic review demonstrated that evidence on diet and risk of CHD from South Asia is scarce. Cross-sectional analyses from the BRAVE study demonstrated that dietary food groups, patterns and nutrients had different associations with the various characteristics, showing the role of modest confounding. There were few strong correlations between food groups, nutrients and dietary patterns. Findings from the food group analyses showed an inverse association between fruits, vegetables, yoghurt, certain spices and the risk of AMI. In contrast, higher consumption of biryani and fish was associated with a higher risk of AMI. Three distinct dietary patterns were identified using principal component analysis: the "energy dense pattern", the "vegetable pattern" and the "fruits and dairy pattern". The vegetable pattern and the fruits and dairy pattern had an inverse association with the risk of AMI. In contrast, the "energy dense pattern" had no significant association with the risk of AMI. As for the analyses of dietary nutrients, higher intake of refined carbohydrates was not associated with the risk of AMI, while non-refined carbohydrates were associated with a lower risk of AMI. Animal protein was associated with a higher risk of AMI, whereas plant protein showed a weak inverse association. As for specific fatty acids, modest intakes of saturated fatty acids from dairy sources and of polyunsaturated fatty acids were associated with a slightly lower risk of AMI. In contrast, monounsaturated fatty acids showed an increased association only in the highest quintile. Conclusions: The present analyses constitute the largest detailed study of diet and CHD based solely on a South Asian population. It confirms associations of some food groups with CHD previously observed in western populations and has also yielded some novel insights on the association of diet with CHD specific to Bangladesh. However, owing to the observational nature of the study, a causal assessment could not be made. The findings of this study stimulate further detailed work, including prospective cohort studies, which may have important potential for local dietary guidelines in Bangladesh and similar settings to help reduce the rising burden of CHD.
- Published
- 2020
- Full Text
- View/download PDF
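The principal component analysis step used above to derive dietary patterns can be sketched as follows; the food groups and intake data are synthetic stand-ins, not BRAVE study data:

```python
# Minimal sketch of deriving dietary patterns from food-group intakes
# with PCA; each component's loadings define one "pattern" and each
# participant gets a score per pattern.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
food_groups = ["rice", "vegetables", "fruit", "dairy", "fish", "biryani"]
intakes = rng.gamma(shape=2.0, scale=1.0, size=(500, len(food_groups)))

X = StandardScaler().fit_transform(intakes)   # standardise each food group
pca = PCA(n_components=3).fit(X)              # three candidate dietary patterns

for i, loadings in enumerate(pca.components_, start=1):
    top = sorted(zip(food_groups, loadings), key=lambda t: -abs(t[1]))[:3]
    print(f"pattern {i}:", top)               # foods loading most strongly
scores = pca.transform(X)                     # per-participant pattern scores
```

Pattern scores of this kind are what get related to AMI risk in the case-control analyses described above.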
9. Facial detection of genetic disorders
- Author
-
Alvi, Mohsan Shahzad, Zisserman, Andrew, and Nellaker, Christoffer
- Subjects
618.92 ,computer vision - Abstract
An estimated 400,000 children are born every year with rare genetic disorders that significantly affect their quality of life. Early detection and intervention can significantly improve the quality of life of these children. Craniofacial characteristics contain highly useful information for clinical geneticists making a diagnosis. This thesis investigates the use of computer vision to aid the automatic detection of genetic disorders from ordinary facial photographs. This is a non-trivial task, in part due to patient privacy concerns and the scarcity of training data. In the following, we present several approaches to overcome these challenges. First, we present a method for creating realistic-looking average faces for individuals sharing a syndrome. These averages remove identifiable features but retain clinically relevant phenotype information and preserve facial asymmetry. This procedure is completely automated, removing the need to expose patient identities at any point during the process, and could be used to help facilitate facial diagnosis in clinical settings. We also investigate creating transitions between averages and exaggerated caricature faces to highlight phenotype differences between patient groups. Second, we investigate the classification of eight genetic disorders with shallow and deep representations. We compare shape and appearance descriptors based on local and dense descriptors and report significant improvements upon previous work. Furthermore, we make use of transfer learning and part-based models to train convolutional networks for syndrome classification. Our results show that deep learning can be used in the context of classifying genetic disorders and is superior to shallow descriptors, despite small training datasets. Neural networks are prone to learning biases present in training datasets and basing their decisions on them. This is particularly relevant for training on small datasets, as is the case in the domain of genetic disorders. We introduce a bias removal algorithm that aims to overcome this challenge, with three distinct contributions: first, ensuring that a network is blind to a known bias in the dataset; second, improving classification performance when faced with an extreme bias; and third, removing multiple spurious variations from the feature representation of a primary classification task. Lastly, we introduce a novel image augmentation method for learning a deep face embedding, the "Interpolated Clinical Face Phenotype Space", that aims to describe clinically relevant face variation. Our contributions are two-fold: 1) interpolations between faces that share a class improve deep representation training from small datasets; 2) between-class interpolations that model the space between classes improve the generalisation performance of the deep representation to unseen syndromes.
- Published
- 2019
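In its simplest form, the syndrome-average-face idea above reduces to a pixel-wise mean over landmark-aligned photographs. The sketch below assumes pre-aligned images (the automated alignment the thesis performs is omitted) and uses random arrays as stand-ins for photographs:

```python
# Toy sketch of a de-identified "average face": the pixel-wise mean of
# landmark-aligned images removes individual identity while keeping the
# phenotype features shared across the group.
import numpy as np

def average_face(aligned_faces: list) -> np.ndarray:
    """Pixel-wise mean of landmark-aligned face images (H x W x 3)."""
    stack = np.stack(aligned_faces).astype(np.float64)
    return stack.mean(axis=0)

faces = [np.random.rand(128, 128, 3) for _ in range(20)]  # placeholder images
avg = average_face(faces)
print(avg.shape)
```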
10. The role of perivascular adipose tissue in vascular function : how hyperglycaemia and adiposity affect vascular control
- Author
-
Saleem, Mohammad Shahzad
- Subjects
612.1 ,QP Physiology - Abstract
Hyperglycaemia associated with diabetes may have detrimental effects on vascular function. Diabetes may be accompanied by obesity, which can potentially compound impaired vascular function by altering the physiological state of adipose tissue. Perivascular adipose tissue (PVAT), the exterior covering layer of most blood vessels, is receiving interest as a paracrine modulator of vascular function. Most conventional pharmacological studies dissect off the adherent adipose tissue, so this aspect of vascular control is often neglected. The present study aimed to investigate the effects of hyperglycaemia and PVAT on control of the porcine coronary artery (PCA). In vitro studies were carried out in organ-bath set-ups using PCAs obtained from the abattoir. Exposure of PCAs to acute hyperglycaemia (22 mM) caused a significant contractile response, which was similar to that caused by the osmotic control (mannitol) and which was attenuated by superoxide dismutase. Superoxide production was detected in the buffer solution incubated with PCAs during hyperglycaemia. These findings suggest that acute hyperglycaemia increased PCA contractility by inducing oxidative stress involving superoxide production. Osmotic stress may possibly have contributed to hyperglycaemia-induced vasoconstriction, which needs to be investigated in future work. The relaxant responses of PCAs to the NO donor (SNP) in the presence of PVAT showed significant potentiation compared to vessels without PVAT. Inhibition of NOS in PCAs (denuded of endothelium) led to a contractile response, which was significantly greater in the presence of PVAT. The Griess reaction detected the presence of nitrite in buffer solutions incubated with PVAT. Moreover, the expression of eNOS was identified in PVAT using Western blotting. These data indicate that the PVAT of PCAs released the relaxant factor NO. Exposure of cleaned PCAs to PVAT significantly increased the basal tone of the vessels, an effect which was significantly attenuated in the presence of a thromboxane A2 (TXA2) receptor antagonist. In addition, PVAT enhanced the contractile responses to 4-AP-induced inhibition of vascular voltage-activated K+ (Kv) channels. This enhancement was attenuated following TXA2 receptor inhibition. These findings point to the release of TXA2 from PVAT, which had a contractile effect by augmenting the closure of Kv channels of PCAs. In addition, isometric tension studies showed that the maximal endothelium-dependent vasorelaxation to cumulative bradykinin was significantly inhibited in the presence of exogenous angiotensin II and PVAT. The latter effect was ameliorated by inhibition of the angiotensin II type 1 (AT1) receptor. ELISA showed the presence of angiotensin II in PVAT. However, Western blotting carried out to detect the expression of ACE1 (which converts angiotensin I to angiotensin II) in PVAT showed non-specific bands and was inconclusive. Angiotensin II may have been released from PVAT, interfering with the endothelium-dependent relaxation responses. In conclusion, the present study has shown that hyperglycaemia influenced the function of PCAs by causing a contractile response, possibly mediated by induction of oxidative stress. Moreover, PVAT impacted on the function of the adjacent vascular smooth muscle, plausibly via release of the relaxant factor NO and the contractile factor TXA2. Finally, PVAT-derived angiotensin II may have inhibited the function of the endothelium of PCAs in a paracrine manner. Future studies in porcine and human coronary arteries will help to further investigate this area.
- Published
- 2019
11. HEAT-PPCI : how effective are antithrombotic therapies in primary percutaneous coronary intervention : a randomised controlled trial comparing unfractionated heparin and bivalirudin
- Author
-
Shahzad, Adeel, Stables, Rod, and Mitchell, Jane
- Subjects
616.1 - Abstract
Aims: Bivalirudin, with selective use of glycoprotein (GP) IIb/IIIa inhibitor agents, is an accepted standard of care in primary percutaneous coronary intervention (PPCI). We performed a trial to compare antithrombotic therapy with bivalirudin or unfractionated heparin (heparin) during PPCI. We also planned pre-specified secondary analyses comparing the antiplatelet and antithrombotic effects of bivalirudin and heparin and the effects of P2Y12-inhibiting agents on platelet reactivity and clinical events. Methods: This was a single-centre, open-label, randomised controlled trial. We used a strategy of delayed consent, and consecutive patients were included without initial discussion. Before angiography, patients were randomised to either heparin (70 units/kg) or bivalirudin (bolus 0·75 mg/kg; infusion 1·75 mg/kg/hour). Patients were followed for 28 days. The primary outcome measure was a composite of all-cause mortality, cerebrovascular accident (CVA), reinfarction or unplanned target lesion revascularisation (TLR). The primary safety outcome was the rate of major bleeding (type 3-5 as per the Bleeding Academic Research Consortium (BARC) definitions). For patients recruited during working hours, we assessed ADP-induced platelet aggregation at the end of the index procedure and at 24 hours. The effects of P2Y12 inhibitors on the primary and safety outcomes were assessed in all patients. Findings: A total of 1829 patients were randomised (100% of eligible patients and 97% of all PPCI-related procedures). A PCI procedure was performed in 82% of cases. The rate of GP IIb/IIIa inhibitor use was: bivalirudin 13·5%, heparin 15·5%. The primary efficacy outcome measure was observed more frequently in patients treated with bivalirudin (8·7% v 5·7%, absolute risk difference = 3%; relative risk [RR] 1·52, 95% confidence interval [CI], 1·09 to 2·13; P=0·01). All elements of the composite favoured heparin, but the difference was mainly related to an increased incidence of stent thrombosis with bivalirudin (3·4% v 0·9%, RR 3·91, 95% CI 1·61 to 9·52; P=0·001), causing reinfarction events. There was no difference in major bleeding (3·5% bivalirudin v 3·1% heparin; RR 1·15, CI 0·70 to 1·89; P=0·59). There were no significant differences between patients who received heparin and bivalirudin in AA- and ADP-mediated platelet aggregation at the end of procedure (EOP) or at 24 hours. Multiple Electrode Aggregometry (MEA) data on antiplatelet therapy from 469 patients showed that prasugrel resulted in significantly greater suppression of ADP-induced platelet aggregation, at 40U (23, 78), than ticagrelor, at 75U (41, 100.75), or clopidogrel, at 79U (56, 96); p < 0.001. After adjustment for baseline characteristics, there were no significant differences in the rates of MACE or major bleeding between the antiplatelet therapies. Conclusions: Use of heparin, rather than bivalirudin, is associated with a reduced rate of major adverse ischaemic events, with no increase in bleeding complications. More systematic use of heparin offers the potential for a substantial reduction in drug cost. There were no significant differences between heparin and bivalirudin with respect to AA- or ADP-mediated platelet aggregation. No significant differences were found between the antiplatelet therapies for clinical outcomes.
- Published
- 2019
- Full Text
- View/download PDF
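The headline result above ("8·7% v 5·7%, RR 1·52, 95% CI 1·09 to 2·13") follows from the standard relative-risk calculation with a log-normal approximation for the confidence interval. The event counts below are illustrative values consistent with the reported percentages, not the trial's exact tables:

```python
# Standard relative-risk calculation with a 95% CI via the log-RR
# normal approximation; counts are illustrative, not the trial tables.
import math

def relative_risk(e1, n1, e0, n0):
    """RR of group 1 vs group 0 with a 95% confidence interval."""
    rr = (e1 / n1) / (e0 / n0)
    se = math.sqrt(1/e1 - 1/n1 + 1/e0 - 1/n0)   # SE of log(RR)
    lo, hi = (math.exp(math.log(rr) + s * 1.96 * se) for s in (-1, 1))
    return rr, lo, hi

print(relative_risk(79, 905, 52, 907))  # ~8.7% vs ~5.7% event rates
```

With these counts the sketch returns RR ≈ 1.52 with a CI of roughly 1.09 to 2.13, matching the reported figures.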
12. The influence of credit risk management strategies on the performance of commercial banks : a comparative case study of UAE and UK commercial banks
- Author
-
Karim, Shahzad and Rowlands, Hefin
- Abstract
This study undertakes a comparative investigation of the influence and adoption of credit risk management strategy on the performance of commercial banks in the United Arab Emirates (UAE) and the United Kingdom (UK). The research assesses the uses of and approaches to credit risk management in the UAE in comparison to the UK, beginning with a thematic literature review that identified the key theories, strategies and principles of the extant credit risk assessment literature, whilst contextualising the distinctiveness of Islamic banking. Adopting a deductive ontological and positivist epistemological position, the research prioritised an 'action research' design that used both quantitative and qualitative data within a mixed methods research design. Using non-probability convenience sampling, primary data were first collected from 100 middle-level bank managers (50 from the UK and 50 from the UAE) by means of a self-administered questionnaire. Qualitative data were subsequently collected from 20 top managers (10 from Emirati banks and 10 from UK banks) through semi-structured interviews. From the analysis of these data, 18 key variables were identified and defined across three categories: credit risk management strategies, factors influencing risk management, and commercial bank profitability. This research contributes to the limited literature on credit risk management in conventional versus Islamic banks, and the findings present a novel comparative analysis of the differences between UK commercial banks and Emirati financial institutions, identifying two key differences. First, the results showed that Emirati banks prioritised financial statement analysis and credit score analysis in their credit risk management, while UK banks prioritised credit portfolio models and exposure limits. Second, with respect to organisational profitability, the Emirati banks implementing creditworthiness analysis and internal ratings to measure their potential credit risks achieve higher returns on equity, compared to those in the UK which use stress testing and exposure limits. The research has policy implications for Emirati financial institutions, such as the exploration and adoption of more profitable risk management strategies and assessment techniques, and also provides valuable information to researchers interested in understanding the role of credit risk management in organisational profitability in both the conventional and Islamic banking sectors.
- Published
- 2019
- Full Text
- View/download PDF
13. Vulnerability of short range wireless technologies to impulsive noise in electricity substations
- Author
-
Bhatti, Shahzad Ahmed and Glover, Ian Andrew
- Subjects
621.3 - Abstract
The technical reliability and economic advantages of using sensors, communications and computing to monitor and control the state of electrical power systems more precisely are many. Implementing some of the communications functions wirelessly is cheaper, more flexible and more convenient than an implementation with their wired counterparts. Whilst wireless networks offer these obvious benefits over wired networks, concerns remain which need to be addressed. One such concern is the performance of wireless networks in the electromagnetically aggressive substation environment, an environment that is particularly rich in impulsive noise due to the presence of partial discharge, power electronics switching and other transient processes. This thesis investigates the degree to which the dominantly impulsive noise environment of an electricity substation degrades the performance of wireless technologies primarily designed to operate in a Gaussian noise environment. The electricity-substation noise environment is modelled as both a Middleton Class-A process and a symmetric α-stable process. Values of the model parameters are estimated from a database of impulsive noise measurements made in a 400/275/132 kV air-insulated substation. Computer simulations are then employed to evaluate the physical-layer bit-error-ratio (BER) performance of the candidate wireless networking technologies, including WLAN, Bluetooth and Zigbee. In the high-SNR region, all candidate technologies are shown to suffer performance degradation beyond that expected in a Gaussian noise environment, whereas AWGN dominates in the low-SNR region. In the high-SNR region there appears to be a noise floor, which reduces the effect of an increase in SNR on the corresponding BER.
- Published
- 2018
- Full Text
- View/download PDF
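The Middleton Class-A modelling and BER simulation described above can be sketched with a small Monte-Carlo experiment. The impulsive index A and Gaussian-to-impulsive power ratio Γ below are illustrative values, not the parameters estimated from the substation measurements, and the modulation is plain BPSK rather than a full WLAN/Bluetooth/Zigbee physical layer:

```python
# Monte-Carlo sketch: BPSK bit-error ratio in Middleton Class-A noise.
# Class-A noise is a Gaussian mixture whose per-sample variance depends
# on a Poisson-distributed number of active interferers.
import numpy as np

rng = np.random.default_rng(1)

def class_a_noise(n, A=0.1, gamma=0.01, power=1.0):
    """Per-sample variance (m/A + gamma)/(1 + gamma), m ~ Poisson(A)."""
    m = rng.poisson(A, size=n)
    var = power * (m / A + gamma) / (1.0 + gamma)
    return rng.normal(0.0, np.sqrt(var))

def ber_bpsk(snr_db, n=200_000):
    bits = rng.integers(0, 2, n)
    x = 2.0 * bits - 1.0                      # BPSK symbols +/-1
    noise_power = 10 ** (-snr_db / 10.0)
    y = x + class_a_noise(n, power=noise_power)
    return np.mean((y > 0).astype(int) != bits)

for snr in (0, 10, 20):
    print(snr, "dB ->", ber_bpsk(snr))
```

At high SNR the error rate stops improving, qualitatively reproducing the noise-floor behaviour the abstract reports.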
14. The Market in Poetry in the Persian World
- Author
-
Bashir, Shahzad
- Published
- 2021
- Full Text
- View/download PDF
15. Synthesis of new heterocycles by SNAr reactions of perfluoroarenes
- Author
-
Riaz, Shahzad
- Subjects
546 ,Fluorine ,Heterocycles ,Perfluoroarenes ,SNAr ,Cyclisation ,Aromatic - Abstract
The reactivity of perfluoroarenes and hetarenes towards SNAr reactions was studied as part of a synthetic programme to form an assembly of novel heterocyclic aromatic compounds for material and pharmaceutical applications. In chapter 1 the chemistry of perfluoroarenes is reviewed, together with the use of conjugated compounds in organo-electronic applications. In chapter 2 the successful replacement of the remaining fluorine atoms in 6,12-difluorobenzo[1,2-b:4,5-b']bis[b]benzothiophene through SNAr reaction with long-chain alkoxy and alkylthio nucleophiles is reported. An X-ray crystallographic investigation into their solid-state packing was undertaken, which provides useful information for organo-electronic applications. Reactions with nitrogen- and carbon-based nucleophiles were also studied but met with little success. In chapter 3, alternative methods for the reductive cyclisation of aryl and 2-bromoaryl perfluoroethers and sulfides to replace the currently used lithium-bromine exchange were explored, namely the use of radical cyclisations, palladium, magnesium, copper and Rieke metals. Some success was found using magnesium as a reagent, although yields were low. Attempts to effect cyclisation reactions by ortho-lithiation and Ullmann coupling reactions with fluoroarenes are also reported. In chapter 4, attempts to generate alternative ring fusion in annulation reactions to form fused benzothiophenes by a dianion strategy are described. The development of methods to synthesise helicene or curved polycyclic structures from dibenzothiophene precursors is reported. In chapter 5 the synthesis of nitrogen-containing fluorinated compounds with potential bioactivity is described. A series of novel amino-substituted fluoroaromatics were successfully synthesised by adding different nitrogen-based nucleophiles to pentafluoropyridine. A Smiles rearrangement of a tetrafluoropyridyl sulphonamide was found to occur. A number of fluoropyridyl aniline derivatives were successfully synthesised, some of which were submitted for biological screening. Substitution reactions of bis-nucleophiles bearing two heteroatom groups to form fused six-membered rings were also studied. A Smiles rearrangement was identified in the reaction with an aminobenzenethiolate and confirmed by X-ray crystallography. Experimental procedures are given in chapter 6, together with characterisation and crystallographic data for the molecules synthesised during the research.
- Published
- 2016
16. High-throughput assessment of small open reading frame translation in Drosophila melanogaster
- Author
-
Mumtaz, Muhammad Ali Shahzad
- Subjects
570 ,QH0447 Genes. Alleles. Genome - Abstract
Hundreds of thousands of putative small ORF (smORF) sequences are present in eukaryotic genomes, potentially coding for peptides of fewer than 100 amino acids. smORFs have been deemed non-coding on the basis of their high numbers and their small size, which make it extremely challenging to assess their functionality both bioinformatically and biochemically. The recently developed Ribo-Seq technique, the deep sequencing of ribosome footprints, has generated significant controversy by showing extensive translation of smORFs outside of annotated protein-coding regions, including in putative non-coding RNAs. Our lab adapted the Ribo-Seq technique by combining it with polysome fractionation in order to assess smORF translation in Drosophila S2 cells. This thesis provides a high-throughput assessment of smORF translation in Drosophila melanogaster, firstly by implementing complementary techniques such as transfection-tagging and mass spectrometry methods in order to provide an independent corroboration of the S2 cell data (Chapter 3). Secondly, in order to expand the catalogue of smORFs that are translated, I significantly improve upon the yield and sequencing efficiency of the Poly-Ribo-Seq protocol while adapting it to Drosophila embryos, and then implement it across embryogenesis divided into Early, Mid and Late stages (Chapter 4). Currently, there is still much debate in the field with regard to Ribo-Seq data analysis, and various computational metrics have been developed aimed at discerning 'real' translation events from background noise. Chapter 5 explores some of the metrics developed and establishes a translation cut-off suitable for designating small ORFs as translated. Altogether, the improvements introduced to the protocol and my data analysis show the translation of 500 annotated smORFs, 500 smORFs in long non-coding RNAs and 5,000 uORFs, of which only one-third of each type of smORF has previous evidence of translation. These findings strengthen the establishment of smORFs as a distinct class of genes that significantly expands the protein-coding complement of the genome.
- Published
- 2016
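One family of metrics used to call smORF translation from Ribo-Seq data is framing: footprints from a translating ribosome accumulate in one reading frame. The sketch below is a toy version of such a metric with an assumed cut-off; the thesis evaluates several metrics, and this is not its specific formula:

```python
# Toy framing metric: fraction of ribosome-footprint 5' ends falling in
# each reading frame relative to a candidate smORF's start codon.
def framing_score(footprint_starts, orf_start):
    """Return the fraction of footprints in frames 0, 1 and 2."""
    frames = [0, 0, 0]
    for pos in footprint_starts:
        frames[(pos - orf_start) % 3] += 1
    total = sum(frames) or 1
    return [f / total for f in frames]

starts = [100, 103, 106, 109, 110, 112, 115]   # mapped 5' ends (toy data)
score = framing_score(starts, orf_start=100)
print(score, "-> translated" if score[0] > 0.6 else "-> ambiguous")
```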
17. New techniques in the management of transient loss of consciousness (TLOC) or blackout
- Author
-
Anwar, Amir Shahzad, Cooper, Paul, and Fitzpatrick, Adam
- Subjects
362.1968 - Abstract
Collapse is defined as an "abrupt loss of postural control" and is a very common presentation to primary and secondary care. It accounts for up to 3% of emergency department cases and 6% of hospital admissions. Many patients are labelled with "collapse ?cause". It should be appreciated that collapse can occur with or without TLOC/blackout. Causes without TLOC include falls, transient ischaemic attacks, cerebrovascular accidents, road traffic accidents, metabolic abnormalities and intoxication. However, most collapse patients have TLOC, the most common causes being syncope, epilepsy or psychogenic blackouts. There are many similarities and overlaps in clinical features, leading to misdiagnosis. There are huge variations in the ways TLOC patients are assessed and managed. Patients are dealt with by different specialties in different clinical settings. There is a lack of clinical tools for assessment and poor risk stratification. Most clinicians take a "safe approach" and, as a result, TLOC patients are often admitted to hospital unnecessarily and over-investigated, which can increase confusion and healthcare cost. We have therefore tried to approach these issues via a dedicated "Rapid Access Blackout Triage Clinic" (RABTC). In this thesis, we address the problem of TLOC in five projects arising from the triage of patients seen in that clinic. Chapter 1 sets the scene for the thesis. Chapter 2 reports the outcomes of a specialist nurse-led RABTC. The clinic uses custom clinical evaluation and risk stratification tools for patients with TLOC, with cardiologist supervision (the author). Nearly two thirds of patients presenting to the RABTC are over 65 years old. Chapter 3 reports the outcomes of pacemaker insertion in elderly patients for minor ECG abnormalities that are not current indications for pacemaker insertion. We speculated that such abnormalities could progress suddenly and transiently at the time of TLOC. Patients underwent pacemaker implantation directly, avoiding further investigations, delay, and the risk of further blackouts and injury. Large numbers of patients with blackouts referred to the RABTC have had many investigations elsewhere with no conclusion. In chapter 4, we studied the effect of the long-term insertable ECG monitor (ILR), which can help make an early diagnosis and avoid unnecessary investigations. We explored the impact of the ILR on time to symptom/ECG correlation and time to diagnosis. In nearly half of the patients, even the ILR is unable to explain the TLOC. Ideally, an ILR would record the ECG, blood pressure and electroencephalogram (EEG); these physiological parameters would be sufficient to distinguish between syncope, epilepsy and psychogenic blackouts. In Chapter 5 the results of an in-depth analysis of the ECG in these patients are presented. Heart rate variability was used to calculate sympathovagal balance. The patients were recruited using video telemetry data from a regional epilepsy centre. Finally, treatment of TLOC depends on its underlying cause, and by far the most common cause is reflex syncope. So far, no treatment has proven benefit in this situation. One drug, midodrine, an alpha-adrenoceptor agonist, has had several albeit unsatisfactory randomised controlled trials. We describe our experience of midodrine in this condition in Chapter 6. Chapter 7 summarises what has been contributed by this thesis.
- Published
- 2016
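The sympathovagal-balance analysis mentioned for Chapter 5 is conventionally computed as the LF/HF power ratio of the RR-interval series. The sketch below uses the conventional band edges and a synthetic RR series; it illustrates the standard calculation, not the thesis's exact pipeline:

```python
# Sketch of heart-rate-variability "sympathovagal balance": the ratio of
# low-frequency (0.04-0.15 Hz) to high-frequency (0.15-0.4 Hz) power in
# the evenly resampled RR-interval series.
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)        # uniform resampling grid
    rr_even = np.interp(grid, t, rr_ms)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf

rr = 1000 + 50 * np.sin(2 * np.pi * 0.1 * np.arange(600))  # toy RR series (ms)
print(lf_hf_ratio(rr))
```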
18. Visualisation of bioinformatics datasets
- Author
-
Mumtaz, Shahzad
- Subjects
572.8 - Abstract
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection and so on. Most existing exploratory approaches cannot analyse these datasets because of the large number of molecules with a high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods in which log-transformations are used at certain steps of the expectation maximisation (EM) based parameter-learning process to make them tractable for high-dimensional datasets. We demonstrate these proposed variants on both synthetic data and an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter-learning process of a visualisation model. This LTM variant not only gives better visualisation, by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem not much addressed in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, using appropriate noise models for each type of data, in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM). We also propose to extend the GGTM model to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models on both synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use the quality metrics of KL divergence and nearest-neighbour classification error in order to determine the separation between classes. We demonstrate the efficacy of these proposed models on both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
- Published
- 2015
- Full Text
- View/download PDF
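The log-transformation trick described above for making GTM-style EM tractable in high dimensions is, at its core, computing responsibilities with log-sum-exp. A minimal sketch of a generic mixture E-step (not the full GTM update) follows:

```python
# Log-domain E-step: with thousands of dimensions, products of per-
# dimension likelihoods underflow to zero, but log-space responsibilities
# computed via log-sum-exp stay well defined.
import numpy as np

def log_responsibilities(log_lik):
    """log_lik: (n_points, n_components) per-component log-likelihoods."""
    m = log_lik.max(axis=1, keepdims=True)          # stabilising constant
    log_norm = m + np.log(np.exp(log_lik - m).sum(axis=1, keepdims=True))
    return log_lik - log_norm                       # log posterior weights

# Log-likelihoods of magnitude ~ -5000 would be exp()-underflow directly.
log_lik = -0.5 * np.random.default_rng(2).uniform(9000, 11000, size=(5, 3))
R = np.exp(log_responsibilities(log_lik))
print(R.sum(axis=1))                                # each row sums to 1
```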
19. Investigating energy transport in high density plasmas using buried layer targets
- Author
-
Shahzad, Mohammed and Tallents, Greg
- Subjects
530 - Abstract
The work presented in this thesis investigates energy transport in laser-irradiated solid targets containing a diagnostic buried iron layer. Energy transport in laser-plasmas is important to inertial confinement fusion and other applications, for example laser ablation, particle acceleration and x-ray production. The steep temperature and density gradients between the critical density (the maximum penetration density for the laser) and the ablation surface, plus the role of fast electrons and radiation, make energy transport in laser-plasmas a complex, non-linear issue. Laser energy can be transported into a solid target by thermal conduction, hot electron heating and radiation transport. To understand the interplay between these non-linear heating processes it is important to accurately characterise plasma conditions as the energy transport occurs. An experiment conducted at the Lawrence Livermore National Laboratory, USA irradiated buried iron layer targets using a 2 ps, 10¹⁷ Wcm⁻² laser, with the subsequent L-shell iron emission recorded using a high resolution (resolving power ≃ 500) grating spectrometer. The HYADES 1D hydrodynamic fluid code and the PrismSPECT collisional-radiative code were used to simulate the plasma conditions and the L-shell iron emission. A comparison between the simulated spectra and the experimentally recorded L-shell emission suggests that the iron layer is heated instantaneously by hot electrons and radiation transport and that this modifies thermal electron conduction. The thermal flux limiter and the laser energy-hot electron conversion efficiency have been determined by comparing experimentally recorded L-shell emission to simulated synthetic spectra. As the iron layer expands and cools, the population of lower ionisation states increases. A novel technique has been developed to characterise the electron temperature and density from L-shell emission spectra using the Saha-Boltzmann equation and multiple line ratios of adjacent ionisation states. An experiment at the LASERIX facility, France used an extreme ultraviolet (EUV) laser as a back-lighter to probe high density laser-irradiated buried iron layer targets. The transmission through the iron layer was simulated using the TOPS, PROPACEOS, IMP and HYADES opacity models. This investigation found that higher opacities are required for plasmas at 20 eV and 0.3 gcm⁻³ in order to account for the drop in transmission at 20 ps after laser irradiation. Radiation transport dominates the heating of the buried iron layer when irradiated by a well defined prepulse. The expanding coronal preplasma efficiently produces hot electrons; however, because of the larger stopping distance associated with 'superthermal' electrons, the heating due to hot electrons is negligible compared to the radiation heating effect.
- Published
- 2015
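The line-ratio technique described above rests on the Saha ionisation balance between adjacent ionisation stages; a standard textbook form (not quoted from the thesis) is

```latex
\frac{n_{z+1}\, n_e}{n_z} = \frac{2\, g_{z+1}}{g_z}
\left( \frac{2\pi m_e k_B T_e}{h^2} \right)^{3/2}
\exp\!\left( -\frac{\chi_z}{k_B T_e} \right)
```

where n_z and g_z are the population and statistical weight (or partition function) of ionisation stage z, χ_z is its ionisation energy, and n_e and T_e are the electron density and temperature. Because a line ratio across adjacent stages depends on both T_e and n_e, combining multiple such ratios allows both quantities to be constrained simultaneously, which is the essence of the novel technique mentioned above.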
20. An investigation of mechanisms to mitigate zero-day computer worms within computer networks
- Author
-
Shahzad, Khurram, Woodhead, Steve, and Bakalis, Panayiotis
- Subjects
620 ,QA Mathematics ,TK Electrical engineering. Electronics Nuclear engineering - Abstract
An Internet worm replicates itself by automatically infecting vulnerable systems and may infect hundreds of thousands of hosts across the Internet in tens of minutes. The speed of propagation of a worm is significantly higher than that of many other types of malware, including viruses. The potential for significant damage within a short time is therefore great. Worm detection and response systems must, therefore, act quickly to identify and counter the effects of worms. In this thesis, an investigation of mechanisms to mitigate zero-day computer worms has been carried out, defining the key research questions to answer. The thesis presents a novel distributed automated worm detection and containment scheme, RL+LA, developed during the course of this research, that is based on the correlation of Domain Name System (DNS) queries against the destination IP addresses of outgoing TCP SYN and UDP datagrams leaving the network boundary, while utilising cooperation between different communicating scheme members using a custom protocol, which has been termed Friends. To the knowledge of the author, this is the first implementation of such a scheme. A set of tools, i.e. a Pseudo-Worm Daemon (PWD), which provides random-scanning and hit-list worm-like functionality, and a Virtualized Malware Testbed (VMT) for testing worm experiments, were also developed in order to empirically evaluate the performance of the proposed countermeasure scheme, RL+LA. A set of empirical experiments was conducted using Pseudo-Slammer and Pseudo-Witty worms with real-world attributes of the Slammer and Witty worms in order to evaluate PWD. The experimental results are broadly comparable to reported data from real worm outbreaks. Furthermore, these results are compared with a biological epidemiological model (the SI model) in order to explore the applicability of the SI model to cyber malware infections in general, as well as to assess its usefulness in characterising the virulence of cyber malware. From the comparison of the Pseudo-Slammer and Pseudo-Witty worm experimental results with the reported outbreak data of the Slammer and Witty worms and with the SI model, it is concluded that: (a) PWD can be used as an effective tool to empirically analyse the propagation behaviour of random-scanning and hit-list worms and to test potential countermeasures; (b) the SI model can be effectively used in characterising the virulence of random-scanning worms. Further comprehensive sets of empirical experiments were also conducted, using a Slammer-like pseudo-worm on a small scale with class C networks and, on class A networks, using Pseudo-Slammer and Pseudo-Witty worms with real attributes of the Slammer and Witty worms, both without any countermeasures and with the RL and RL+LA countermeasures invoked, in order to evaluate the performance of the proposed scheme, RL+LA. The experimental results show a significant reduction in the infection speed of the worms when the countermeasure scheme is invoked.
- Published
- 2015
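The SI epidemiological model used above to characterise worm virulence is the logistic growth equation dI/dt = βI(N − I)/N. A minimal numerical sketch with illustrative parameters (not fitted Slammer or Witty values) follows:

```python
# Sketch of the SI (susceptible-infected) model of worm propagation,
# integrated with forward Euler; beta and N are illustrative only.
import numpy as np

def si_curve(n_hosts=100_000, beta=1.8, i0=1, t_end=10.0, dt=0.01):
    """Integrate dI/dt = beta * I * (N - I) / N."""
    steps = int(t_end / dt)
    infected = np.empty(steps)
    infected[0] = i0
    for k in range(1, steps):
        i = infected[k - 1]
        infected[k] = i + dt * beta * i * (n_hosts - i) / n_hosts
    return infected

curve = si_curve()
print(f"infected after 10 time units: {curve[-1]:.0f} of 100000")
```

The characteristic S-shaped curve this produces (slow start, explosive growth, saturation) is the shape against which the pseudo-worm experiments are compared.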
21. Individual thermal control in the workplace : cellular vs open plan offices : Norwegian and British case studies
- Author
-
Shahzad, Salome Sally, Brennan, John, and Theodossopoulos, Dimitrios
- Subjects
720 ,thermal comfort ,individual control ,workplace - Abstract
This research is based on the challenge in the field of thermal comfort between the steady-state and adaptive comfort theories. It challenges the concept of a standard 'comfort zone' and investigates the application of 'adaptive opportunity' in the workplace. The research question is: 'Does thermal control improve user satisfaction in cellular and open plan offices? Norwegian vs. British practices'. Currently, centrally controlled thermal systems are replacing individual thermal control in the workplace (Bordass et al., 1993, Roaf et al., 2004) and modern open plan offices are replacing traditional cellular plan offices in Scandinavia (Axéll and Warnander, 2005). However, users complain about the lack of individual thermal control (Van der Voordt, 2003), which is predicted to be an important asset to the workplace in the future (Leaman and Bordass, 2005). This research seeks users' opinions on improving their satisfaction, comfort and health in two environments with high and low levels of thermal control, respectively the Norwegian and British workplace contexts. Two air-conditioned Norwegian cellular plan offices, which provide every user with control over a window, blinds and door and the ability to adjust the temperature, are compared against two naturally and mechanically ventilated British open plan offices with limited thermal control over the windows and blinds for occupants seated around the perimeter of the building. Complementary quantitative and qualitative methodologies are applied, with a particular emphasis on grounded theory, on which basis the research plan is formulated through a process of pilot studies. Occupants' perception of their thermal environment within the building is recorded through a questionnaire, and empirical building performance through thermal measurements. These traditional techniques are further reinforced with semi-structured interviews to investigate thermal control. A visual recording technique is introduced to analyse the collected information qualitatively with regard to context and meaning. The ASHRAE Standard 55-2010 and its basis do not apply to the case-study buildings in this research. This thesis suggests that thermal comfort is dynamic rather than fixed. Occupants are more likely to prefer different thermal settings at different times, which is in contrast with providing a steady thermal condition according to the standard 'comfort zone'. Furthermore, the occupants of the Norwegian cellular plan offices in this research report up to 30% higher satisfaction, comfort and health levels compared to the British open plan offices, suggesting the impact of the availability of individual thermal control. This research suggests that rather than providing a uniform thermal condition according to the standard 'comfort zone', office buildings should provide a degree of flexibility to allow users to find their own comfort by adjusting their thermal environment according to their immediate requirements.
- Published
- 2014
22. The effects of country-of-origin image on consumer product involvement : a Pakistani university teachers' perspective
- Author
-
Shahzad, A.
- Subjects
658.8 - Abstract
This study aims at investigating the consumer behaviour (of university teachers) in Pakistan with reference to the effects and association of country-of-origin (COO) image on consumer product involvement. In order to gain in-depth insights, the construct of COO-image is studied in terms of the country’s economic development and in a certain product category. The study explores the effects and association of the two phenomena. Furthermore, the effects of COO-image in terms of the country’s economic development and COO-image in a product category on low and high consumer product involvement are studied. Finally, the study measures the moderating effects of consumer ethnocentrism and consumer intention to adopt (in terms of innovativeness) on the effects of COO-image in a product category and COO-image in terms of economic development on low and high consumer product involvement. Given the nature of the study, a positivist approach is adopted, following a quantitative research strategy. The data are collected using a survey technique based on questionnaires. The sample population is university academicians: 1509 university academicians from various cities in Pakistan took part in the study by completing the questionnaire. The data are then analysed using descriptive statistics, correlation analysis and regression analysis. The study establishes that highly educated and affluent Pakistani consumers are so strongly influenced by the COO-image (especially in terms of the country’s economic development) that their ethnocentrism and intentions to adopt lose significance in shaping their attitude and behaviour related to both low and high involvement products (food and drinks, and automobiles respectively). The current study is one of the few such studies conducted in a developing country, especially Pakistan, and offers valuable empirical insights into the effects of COO-image (especially from a developing-country perspective). The findings will be significant for COO research as well as for businesses operating in developing countries such as Pakistan.
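For readers unfamiliar with moderation analysis, the kind of test described above can be posed as a regression with an interaction term. The Python sketch below uses statsmodels; the CSV file and column names are hypothetical stand-ins, not the study's actual variables.

```python
# Hedged sketch of a moderation test: does consumer ethnocentrism change
# the strength of the COO-image effect on product involvement?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical questionnaire data

# The coo_image:ethnocentrism interaction coefficient carries the
# moderation effect; a significant estimate indicates moderation.
model = smf.ols("involvement ~ coo_image * ethnocentrism", data=df).fit()
print(model.summary())
```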
- Published
- 2014
23. Performance characterization of computational resources for time-constrained job execution in P2P environments
- Author
-
Awan, Malik Shahzad K.
- Subjects
004 ,QA76 Electronic computers. Computer science. Computer software ,TK Electrical engineering. Electronics Nuclear engineering - Abstract
Peer-to-peer (P2P) computing, involving the participation of thousands of general-purpose, public computers, has established itself as a viable paradigm for executing loosely-coupled, complex scientific applications requiring significant computational resources. The paradigm provides cheap, general-purpose computing resources with computing power (FLOP/s) comparable to an otherwise expensive supercomputer. The main characteristic of the paradigm is the volunteer participation of the general public, without any legal obligation, who dedicate their heterogeneous computational resources to advancing scientific research. The development of several middleware solutions has also furthered the application of P2P computing to solving complex scientific problems. The Berkeley Open Infrastructure for Network Computing (BOINC) is one of the most widely deployed middleware platforms in P2P systems, and has been deployed on more than 7.5 million general-purpose computers for scientific computations, achieving an overall performance of 16,632.605 TeraFLOPS. ClimatePrediction.net, a large P2P project based on the BOINC middleware, involves more than 429,000 machines representing 200 different microprocessor architectures and running 21 distinct operating systems. The availability of such a large and diverse set of computational resources requires an in-depth investigation into the performance aspects of available computational resources in this dynamic P2P environment. This thesis analyses the performance data of ClimatePrediction.net, primarily collected using two benchmarks, Dhrystone and Whetstone, which form part of the BOINC middleware. The results reveal a significant variation in integer and floating-point operational performance, characterized by Dhrystone and Whetstone respectively, for similar microprocessors, operating systems and hardware configurations. Under the BOINC environment, these performance results could be useful for: i) the selection of a suitable computing platform for executing time-constrained jobs; ii) calculating an incentive unit for rewarding project participants for their volunteer participation in large P2P projects to advance scientific research; and iii) efficient and effective utilization of available computational resources. However, the inconsistency in the performance results of Dhrystone and Whetstone significantly affects their usefulness in these three important application areas, and highlights the need for reliable and consistent performance results to obtain maximum benefit in an uncontrolled and dynamic P2P environment. This thesis, based on the analysis of the performance data of ClimatePrediction.net, identifies the key challenges associated with benchmarking in P2P environments. It further proposes the design of a new lightweight, P2P-representative benchmark based on the source code of large P2P projects. The design outline of this benchmark – MalikStone – is presented, and its results are compared with Dhrystone, Whetstone and SPEC CPU2006, showing its superiority in terms of consistency over both Dhrystone and Whetstone. For floating-point performance, MalikStone gave more representative results than Whetstone for Intel Core i5-2400, Q9400, Q6600 and Pentium D processors, with the standard deviation of repeated runs remaining less than 1 for each of the platforms.
Similarly, for integer operations, MalikStone performed more consistently than Dhrystone, with the standard deviation of repeated runs remaining less than 1, and gave more representative results for the Core i5-2400, Q9400, Q6600 and Pentium D processors. In addition to the consistency of its performance results, MalikStone captures broader performance characteristics by measuring floating-point, integer, bitwise-logic, string-manipulation and programming-construct operations. The performance results of MalikStone are further used for designing a new incentive unit – MalikCredit – for ensuring fairness in rewarding project participants for their volunteer participation in large P2P projects to advance scientific research. MalikCredit is compared with BOINC’s existing incentive unit – Cobblestone – at three levels: 1) hourly level; 2) work-unit level; and 3) team level, with the results showing fairness in the rewards awarded using MalikCredit. This in turn is useful for retaining existing project participants and attracting new volunteers to large P2P projects, thereby enhancing the application of P2P computing to solving scientific problems. A comparison of the credit values for the considered microprocessor architectures reveals that MalikCredit values are at least 2X greater than Cobblestone values before normalization, while the difference increases to up to 3.3X for the fastest microprocessor once normalization is applied to the claimed Cobblestones. The performance characterization done by MalikStone is further applied to scheduling computational resources by dynamically slicing work-units, keeping in view the available computational time of the resources and the estimated execution time of the work-unit. The results of this new scheduling policy highlight its usefulness in maximizing the utilization of available computational resources when compared to BOINC’s traditional scheduling policies: the policy improved the utilization of available computational resources by approximately 10% for the considered set of computational resources under the experimental setup of the case study (see Chapter 5). The findings of this thesis are envisaged to be primarily of significance to three main stakeholders: i) application developers; ii) project participants; and iii) project administrators. For application developers, the performance characterization done by MalikStone will be useful in exploiting the characteristics of underlying platforms for efficient execution, while at the same time supporting improvement efforts for future versions of the software. The results will support project participants by informing them of the amount of RAM, swap memory and main memory consumed during execution. Fairness in the received rewards will encourage existing project participants to continue participating in the lengthy execution of large P2P projects and will motivate new volunteers to dedicate their computational resources to such projects. For project administrators, the findings of this thesis will be useful in identifying suitable processor, operating system and hardware component configurations for best-case execution; in such a case the middleware might be instructed to postpone the allocation of work until a more effective architecture became available.
Further, the newly proposed scheduling policy, involving dynamic slicing of work-units based on the performance characterization of MalikStone, could be deployed to improve the utilization of available computational resources. Finally, a few avenues of future research have been identified which, if explored, could further enhance the appeal of this dynamic and uncontrolled P2P computing paradigm for cheaply solving complex and lengthy scientific problems that otherwise require an enormous financial outlay as well as computational resources exceeding even those of traditional supercomputers.
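The consistency criterion reported above (standard deviation of repeated runs below 1) is easy to illustrate. The Python sketch below times a toy workload repeatedly and applies that check; the workload and scoring are stand-ins and do not reflect MalikStone's actual operation mix.

```python
# Illustrative benchmark-consistency check: repeated runs on one platform
# should have a standard deviation below 1, per the criterion above.
import statistics
import time

def toy_workload():
    # Stand-in mix of integer and string work; MalikStone's real mix also
    # covers floating-point, bitwise-logic and programming constructs.
    acc, s = 0, ""
    for i in range(200_000):
        acc += i ^ (i << 1)
        s = ("x" + s)[:16]
    return acc + len(s)

def score(runs=5):
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        toy_workload()
        times.append(time.perf_counter() - t0)
    rates = [1.0 / t for t in times]   # higher = faster, like MIPS/FLOPS
    return statistics.mean(rates), statistics.stdev(rates)

mean_rate, sd = score()
print(f"mean={mean_rate:.2f} runs/s, sd={sd:.3f}, consistent={sd < 1.0}")
```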
- Published
- 2013
24. Micellar chromatographic partition coefficients and their application in predicting skin permeability
- Author
-
Shahzad, Yasser
- Subjects
615.1 ,QD Chemistry ,RS Pharmacy and materia medica - Abstract
The major goal of physicochemical screening of pharmaceuticals is to predict human drug absorption, distribution, elimination, excretion and toxicity. These all depend on the lipophilicity of the drug, which is expressed as a partition coefficient, i.e. a measure of a drug’s preference for the lipophilic or hydrophilic phase. The most common method of determining a partition coefficient is the shake-flask method using octanol and water as the partitioning media. However, this system has many limitations when modelling the interaction of ionised compounds with membranes, and consequently unreliable partitioning data have been reported for many solutes. In addition to these concerns, the procedure is tedious and time-consuming and requires a high level of solute and solvent purity. Micellar liquid chromatography (MLC) has been proposed as an alternative technique for measuring partition coefficients, utilising surfactant aggregates known as micelles. This thesis investigates the application of MLC to determining the micelle-water partition coefficients (logPMW) of pharmaceutical compounds of varying physicochemical properties. The effects of mobile phase pH and column temperature on the partitioning of compounds were evaluated. The results revealed that partitioning of drugs solely into the micellar core was influenced by the interaction of charged and neutral species with the surface of the micelle. Furthermore, the pH of the mobile phase significantly influenced the partitioning behaviour, and a good correlation of logPMW was observed with calculated distribution coefficient (logD) values. More interestingly, a significant change in partitioning was observed near the dissociation constant of each drug, indicating an influence of ionised species on association with the micelle and retention on the stationary phase. Elevated column temperatures confirmed that partitioning of the drugs considered in this study was enthalpically driven, with a small change in the entropy of the system owing to the change in the nature of hydrogen bonding. Finally, a quantitative structure-property relationship was developed to evaluate the biological relevance of the newly developed partition coefficient values in predicting skin permeability. This study provides a better surrogate for predicting skin permeability based on an easy, fast and cheap experimental methodology, and the method holds predictive capability for a wider population of drugs. In summary, it can be concluded that MLC can generate partition coefficient values in a shorter time with higher accuracy, and has the potential to replace the octanol-water system for pharmaceutical compounds.
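To make the partitioning arithmetic concrete, here is a hedged Python sketch: it fits the linear Armstrong-Nome form 1/k = a + b·C_M to made-up retention data to obtain a solute-micelle binding constant, and evaluates the classic Potts-Guy skin-permeability QSPR as an example of how a lipophilicity descriptor feeds such predictions. All numbers are invented, and the thesis's own models may differ.

```python
# One standard MLC treatment (Armstrong-Nome) is linear in micelle
# concentration: 1/k = a + b*C_M, with binding constant K = b/a.
import numpy as np

c_m = np.array([0.02, 0.04, 0.06, 0.08, 0.10])   # micellar conc. (M), made up
k   = np.array([8.5, 5.1, 3.7, 2.9, 2.4])        # retention factors, made up

b, a = np.polyfit(c_m, 1.0 / k, 1)               # slope, intercept of 1/k vs C_M
K_am = b / a                                      # solute-micelle binding constant
print(f"K_AM ~ {K_am:.1f} M^-1")

# Classic Potts-Guy QSPR (kp in cm/h), shown only to illustrate how a
# lipophilicity descriptor feeds a permeability model; the thesis builds
# its own relationship from micellar partition coefficients.
def log_kp_potts_guy(log_p, mw):
    return -2.7 + 0.71 * log_p - 0.0061 * mw

print(f"log kp ~ {log_kp_potts_guy(2.5, 250):.2f}")
```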
- Published
- 2013
25. Inference dynamics in transcriptional regulation
- Author
-
Asif, Hafiz Muhammad Shahzad, Sanguinetti, Guido, and Williams, Chris
- Subjects
572.8 ,computational systems biology ,modelling ,gene regulation ,transcription factor proteins ,inference algorithms ,transcriptional regulation - Abstract
Computational systems biology is an emerging area of research that focuses on understanding a holistic view of complex biological systems with the help of statistical, mathematical and computational techniques. The regulation of gene expression in gene regulatory networks is a fundamental task performed by all known forms of life. In this subsystem, modelling the behaviour of the components and their interactions can provide useful biological insights. Statistical approaches to understanding biological phenomena such as gene regulation are proving useful for understanding biological processes that are otherwise not comprehensible due to the sheer volume of information and the attendant experimental difficulties. A combination of experimental and computational biology can potentially lead to a system-level understanding of biological systems. This thesis focuses on the problem of inferring the dynamics of gene regulation from the observed output of gene expression. Understanding the dynamics of the regulatory proteins that drive gene expression is a fundamental task in elucidating hidden regulatory mechanisms. For this task, an initial fixed structure of the network is obtained using experimental biology techniques. Given this network structure, the proposed inference algorithms make use of the expression data to predict the latent dynamics of transcription factor proteins. The thesis starts with an introductory chapter that familiarises the reader with the physical entities in biological systems; we then present the basic framework for inference in transcriptional regulation and highlight the main features of our approach. Chapter 2 introduces the methods and techniques that we use for inference in biological networks, setting the foundation for the remaining chapters of the thesis. Chapter 3 describes four well-known methods for inference in transcriptional regulation, with the pros and cons of each. The main contributions of the thesis are presented in the following three chapters. Chapter 4 describes a model for inference in transcriptional regulation using state space models. We extend this method to cope with expression data obtained from multiple independent experiments where time dynamics are not present. We believe the time has arrived to package methods like these into customised software tailored for biologists analysing expression data, so we developed an open-source, platform-independent implementation of this method (TFInfer) that can process expression measurements with biological replicates to predict the activities of proteins and their influence on gene expression in a gene regulatory network. The proteins in the regulatory network are known to interact with one another in regulating the expression of their downstream target genes. To take this into account, we propose a novel method to infer the combinatorial effect of the proteins on gene expression using a variant of the factorial hidden Markov model. We describe the inference mechanism in this combinatorial factorial hidden Markov model (cFHMM) using an efficient variational Bayesian expectation maximisation algorithm. We study the performance of the proposed model using simulated data, identify its limitations under different noise conditions, and then use three real expression datasets to find the extent of combinatorial transcriptional regulation present in them. This constitutes chapter 5 of the thesis.
In chapter 6, we focus on the problem of inferring the groups of proteins that are under the influence of the same external signals and thus have similar effects on their downstream targets. The main objectives of this work are twofold: firstly, identifying clusters of proteins with similar dynamics indicates their role in specific biological mechanisms and is therefore potentially useful for novel biological insights; secondly, clustering naturally leads to better estimation of the transition rates of the activity profiles of the regulatory proteins. The method we propose uses Dirichlet process mixtures to cluster the latent activity profiles of regulatory proteins, which are modelled as the latent Markov chains of a factorial hidden Markov model; we refer to this method as DPM-FHMM. We extensively test our methods using simulated and real datasets and show that our model gives better results for inference in transcriptional regulation than a standard factorial hidden Markov model. In the last chapter, we present conclusions about the work presented in this thesis and propose future directions for extending this work.
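The underlying modelling idea, gene expression as a noisy readout of hidden binary transcription-factor activities, can be illustrated in a few lines. The Python sketch below infers a posterior over TF activity vectors by brute-force enumeration, which is feasible only for a handful of factors; the thesis uses variational Bayesian inference for realistic sizes, and every number here is invented.

```python
# Toy sketch: expression y is a noisy linear readout of binary TF states,
# inferred by enumerating all states (only viable for very few TFs).
import itertools
import numpy as np

X = np.array([[1, 0], [1, 1], [0, 1]])                # known wiring: genes x TFs
W = np.array([[2.0, 0.0], [1.5, -1.0], [0.0, 2.5]])   # regulation strengths
sigma = 0.5                                           # observation noise

def posterior_over_tf_states(y):
    """P(TF activity vector | observed expression y), uniform prior."""
    scores = {}
    for s in itertools.product([0, 1], repeat=X.shape[1]):
        mean = (W * X) @ np.array(s)      # only wired TFs contribute
        ll = -np.sum((y - mean) ** 2) / (2 * sigma**2)
        scores[s] = np.exp(ll)
    z = sum(scores.values())
    return {s: p / z for s, p in scores.items()}

y_obs = np.array([2.1, 0.4, 2.3])
for state, prob in posterior_over_tf_states(y_obs).items():
    print(state, round(prob, 3))
```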
- Published
- 2012
26. A quality-aware cloud selection service for computational modellers
- Author
-
Nizamani, Shahzad Ahmed, Dew, P., and Djemame, K.
- Subjects
004.67 - Abstract
This research sets out to help computational modellers select the most cost-effective Cloud service provider when they opt to use Cloud computing in preference to in-house High Performance Computing (HPC) facilities. A novel Quality-aware computational Cloud Selection (QaComPS) service is proposed and evaluated. This selects the best (cheapest) Cloud provider's service and, after selection, automatically sets up and runs it. QaComPS includes an integrated ontology that makes use of OWL 2 features. The ontology provides a standard specification and a common vocabulary for describing different Cloud providers' services. The semantic descriptions are processed by the QaComPS Information Management service. These provider descriptions are then used by a filter and the MatchMaker to automatically select the highest-ranked service that meets the user's requirements. A SAWSDL interface is used to transfer semantic information to and from the QaComPS Information Management service and the non-semantic selection and run services. The QaComPS selection service has been quantitatively evaluated for accuracy and efficiency against the Quality Matchmaking Process (QMP) and the Analytic Hierarchy Process (AHP). The service was also evaluated qualitatively by a group of computational modellers. The results of the evaluation were very promising and demonstrated QaComPS's potential to make Cloud computing more accessible and cost-effective for computational modellers.
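The filter-then-rank pattern described above can be shown compactly. The Python sketch below drops offers that fail hard requirements and returns the cheapest survivor; the record fields are illustrative and do not reflect the QaComPS ontology.

```python
# Illustrative filter-then-rank selection: reject offers failing hard
# requirements, then pick the cheapest eligible one.
from dataclasses import dataclass

@dataclass
class CloudOffer:
    name: str
    cores: int
    ram_gb: int
    price_per_hour: float

def select(offers, min_cores, min_ram_gb):
    eligible = [o for o in offers if o.cores >= min_cores and o.ram_gb >= min_ram_gb]
    if not eligible:
        raise ValueError("no provider meets the requirements")
    return min(eligible, key=lambda o: o.price_per_hour)  # cheapest eligible

offers = [CloudOffer("A", 8, 32, 0.40), CloudOffer("B", 16, 64, 0.35),
          CloudOffer("C", 4, 16, 0.10)]
print(select(offers, min_cores=8, min_ram_gb=32).name)  # -> "B"
```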
- Published
- 2012
27. Novel active sweat pores based liveness detection techniques for fingerprint biometrics
- Author
-
Memon, Shahzad Ahmed and Balachandran, W.
- Subjects
570.1 ,Fingerprint ,Biometrics ,Liveness detection ,Sensors ,Microelectrodes - Abstract
Liveness detection in automatic fingerprint identification systems (AFIS) is an issue which still prevents their use in many unsupervised security applications. In the last decade, various hardware and software solutions for the detection of liveness from fingerprints have been proposed by academic research groups; however, the proposed methods have not yet been practically implemented with existing AFIS, and a large amount of research is needed before liveness detection can be deployed in commercial AFIS. In this research, novel active-pore-based liveness detection methods are proposed for AFIS. These methods are based on the detection of active pores on fingertip ridges and the measurement of ionic activity in the sweat fluid that appears at the openings of active pores. The literature is critically reviewed in terms of liveness detection issues, existing fingerprint technology, and the hardware and software solutions proposed for liveness detection. A comparative study was completed on commercially and specifically collected fingerprint databases, and it was concluded that the images in these datasets do not contain any visible evidence of liveness. They have been used to test various algorithms developed for liveness detection; however, to implement proper liveness detection in fingerprint systems, a new database with fine details of fingertips is needed. Therefore a new high-resolution Brunel Fingerprint Biometric Database (B-FBDB) was captured and collected for this novel liveness detection research. The first proposed liveness detection method is a High Pass Correlation Filtering Algorithm (HCFA). This image processing algorithm was developed in Matlab and tested on B-FBDB dataset images. The results of the HCFA algorithm prove the idea behind the research, as they successfully demonstrate the clear possibility of liveness detection through active pore detection in high-resolution images. The second liveness detection method is based on experimental evidence: liveness is detected by measuring the ionic activity above a sample of ionic sweat fluid. A Micro Needle Electrode (MNE) based setup was used in this experiment to measure the ionic activity. Charges of 5.9 pC to 6.5 pC were detected at ten MNE positions (50 μm to 360 μm) above the surface of the ionic sweat fluid. These measurements provide further proof of liveness from active fingertip pores, and the technique could be used in future liveness detection solutions. The interaction of the MNE and the ionic fluid was modelled in COMSOL Multiphysics, and the effect of electric field variations on the MNE was recorded at positions 5 μm to 360 μm above the ionic fluid.
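A hedged sketch of the high-pass idea behind HCFA follows: subtracting a Gaussian-blurred copy of a fingerprint image leaves small bright features, such as open sweat pores, standing out. The parameters and synthetic image below are illustrative; the actual HCFA is implemented in Matlab and is more involved.

```python
# High-pass filtering for pore detection: original minus low-pass copy,
# then threshold and locate small bright blobs as pore candidates.
import numpy as np
from scipy import ndimage

def candidate_pores(img, blur_sigma=3.0, z_thresh=3.0):
    img = img.astype(float)
    high_pass = img - ndimage.gaussian_filter(img, blur_sigma)
    z = (high_pass - high_pass.mean()) / (high_pass.std() + 1e-9)
    mask = z > z_thresh                        # small, unusually bright spots
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))  # pore centres

rng = np.random.default_rng(0)
fake = rng.normal(128, 5, (64, 64))
fake[20, 30] += 60; fake[40, 10] += 60         # two synthetic "pores"
print(candidate_pores(fake))
```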
- Published
- 2012
28. The social psychology of extremism : reconceptualising extremism through global perceptions
- Author
-
Shafqat, Shahzad
- Subjects
150 ,Extremists--Psychology - Published
- 2011
29. Slow invariant manifold and its approximations in kinetics of catalytic reactions
- Author
-
Shahzad, Muhammad, Gorban, Alexander, and Tyukin, Ivan
- Subjects
541 - Abstract
Equations of chemical kinetics typically include several distinct time scales. There exist many methods which allow fast variables to be excluded and the equations to be reduced to the slow manifold. In this thesis, we start by studying the background of the quasi-equilibrium approximation, the main approaches to this approximation, its consequences and other related topics. We present the general formalism of the quasi-equilibrium (QE) approximation with a proof of the persistence of entropy production in the QE approximation. We demonstrate how to apply this formalism to chemical kinetics and describe the difference between the QE and quasi-steady-state (QSS) approximations. In 1913, Michaelis and Menten used the QE assumption that all intermediate complexes are in fast equilibrium with free substrates and enzymes. A similar approach was developed by Stueckelberg (1952) for the Boltzmann kinetics. Following them, we combine the QE (fast equilibria) and QSS (small amounts) approaches and study general kinetics with fast intermediates present in small amounts. We prove the representation of the rate of an elementary reaction as a product of a Boltzmann factor (purely thermodynamic) and a kinetic factor, and find the basic relations between kinetic factors. In the practice of modelling, a kinetic model may initially not respect thermodynamic conditions. For these cases, we solve the following problem: is it possible to deform (linearly) the entropy so as to provide agreement between the given kinetic model and the deformed thermodynamics? We demonstrate how to modify the QE approximation for stiffness removal in the example of CO oxidation on Pt. The QSSA was applied in order to obtain an approximation to the one-dimensional invariant grid for the oxidation of CO over Pt. The method of the intrinsic low-dimensional manifold (ILDM) was implemented for the same example (CO oxidation on Pt) in order to automate the process of reduction and provide a more accurate simplified mechanism (in one dimension), albeit at the cost of a significantly more complicated implementation.
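As a worked example of excluding a fast intermediate, here is the textbook quasi-steady-state reduction of the Michaelis-Menten mechanism E + S <-> ES -> E + P, consistent with, though not copied from, the thesis:

```latex
% Standard QSS reduction of the Michaelis-Menten mechanism;
% rate constants k1, k-1, k2, total enzyme [E]_0.
\begin{align*}
\frac{d[ES]}{dt} &= k_1[E][S] - (k_{-1}+k_2)[ES] \approx 0
    && \text{(QSS: intermediate present in small amount)}\\
[ES] &= \frac{[E]_0[S]}{K_M + [S]}, \qquad K_M = \frac{k_{-1}+k_2}{k_1}
    && \text{(using } [E] = [E]_0 - [ES]\text{)}\\
\frac{d[P]}{dt} &= k_2[ES] = \frac{k_2[E]_0[S]}{K_M + [S]}
    && \text{(reduced slow dynamics)}
\end{align*}
```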
- Published
- 2011
30. New optimization methods in predictive control
- Author
-
Shahzad, Amir, Kerrigan, Eric, and Constantinides, George A.
- Subjects
003.5 - Abstract
This thesis is mainly concerned with the efficient solution of a linear discrete-time finite horizon optimal control problem (FHOCP) with quadratic cost and linear constraints on the states and inputs. In predictive control, such a FHOCP needs to be solved online at each sampling instant, which requires the solution of a quadratic programming (QP) problem. Interior point methods (IPMs) have proven to be an efficient way of solving quadratic programming problems. A linear system of equations needs to be solved in each iteration of an IPM. The ill-conditioning of this linear system in the later iterations of the IPM prevents the use of an iterative method for solving it, owing to a very slow rate of convergence; in some cases the solution never reaches the desired accuracy. A new well-conditioned IPM, which increases the rate of convergence of the iterative method, is proposed. The computational advantage is obtained by the use of an inexact Newton method along with novel preconditioners. A new warm-start strategy is also presented for solving, with an interior-point method, a QP whose data are slightly perturbed from the previous QP. The effectiveness of this warm-start strategy is demonstrated on a number of benchmark problems available online. Numerical results indicate that the proposed technique depends upon the size of the perturbation and that it leads to a reduction of 30-74% in floating point operations compared to a cold-start interior point method. Following the main theme of this thesis, improving the computational efficiency of algorithms, an efficient algorithm is also presented for solving the coupled Sylvester equation that arises in converting a system of linear differential-algebraic equations (DAEs) to ordinary differential equations. A significant computational advantage is obtained by exploiting the structure of the matrices involved. The proposed algorithm removes the need to solve a standard Sylvester equation or to invert a matrix. The improved performance of this new method over existing techniques is demonstrated by comparing the number of floating-point operations and via numerical examples.
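For orientation, the FHOCP described above has the following shape when posed directly in a modelling layer (the thesis instead develops a tailored interior-point solver with preconditioning and warm-starts). The dynamics, weights and bounds below are illustrative, and cvxpy's warm_start flag is only a cheap analogue of the warm-start strategy studied in the thesis.

```python
# Sketch of a finite horizon optimal control problem with quadratic cost
# and input constraints, posed with cvxpy for a toy double integrator.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # discrete-time dynamics
B = np.array([[0.5], [1.0]])
N, Q, R = 10, np.eye(2), 0.1 * np.eye(1)

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = np.array([5.0, 0.0])

cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.norm(u[:, k], "inf") <= 1.0]      # input constraint
prob = cp.Problem(cp.Minimize(cost), constr)
prob.solve(warm_start=True)   # reuse the previous solution at the next
                              # sample: a cheap analogue of warm-starting
print(u.value[:, 0])          # first input is applied, horizon recedes
```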
- Published
- 2010
- Full Text
- View/download PDF
31. Novel selenium-mediated cyclisations
- Author
-
Shahzad, Sohail Anjum
- Subjects
547 - Abstract
The present work describes selenium-mediated cyclofunctionalisations of alkenes. Three different areas are reported herein. Chapter 2 reports the syntheses of several substrates for carbocyclisation reactions and the use of selenium and Lewis acids to produce various dihydronaphthalenes. These dihydronaphthalenes then acted as substrates for second ring-forming reactions. This novel tandem double cyclisation comprises a carboannulation, a Friedel-Crafts reaction and a rearrangement. The cascade sequence has proven to be a useful tool in the selective synthesis of dihydronaphthalenes and benzofluorenes from easily accessible stilbenes and provides fast access to polycyclic ring systems in a single step. Chapter 3 describes electrophilic selenium-mediated reactions which have been used to cyclise a range of β-keto esters to the corresponding biaryl compounds under very mild conditions. The products were formed by a carboannulation via an addition/elimination sequence and a subsequent rearrangement of a range of alkyl and aryl groups. The key starting materials, stilbene β-keto esters, were readily prepared by Heck coupling and hydrolysis, followed by condensation with potassium ethyl malonate. Chapter 4 describes work on catalytic selenium reagents with a stoichiometric amount of hypervalent iodine to convert a range of stilbene carboxylic acids into their corresponding isocoumarins. The work also describes the selective synthesis of dihydroisocoumarins using diphenyl disulfide and dimethyl diselenide.
- Published
- 2010
32. Impact and fatigue properties of natural fibre composites
- Author
-
Shahzad, Asim
- Subjects
620.1 - Published
- 2009
33. An integrated approach to planning of recycling activities for the waste from electrical and electronic equipment
- Author
-
Abu Bakar, Muhammad Shahzad
- Subjects
628 ,Mechanical Engineering not elsewhere classified - Abstract
This thesis reports on research undertaken to improve the end-of-life management of Waste from Electrical and Electronic Equipment (WEEE) through the generation of bespoke recycling process plans for various electrical and electronic products. The principal objective of this research is to develop an integrated framework to incorporate the related product, process and legislative information during end-of-life management, to promote sustainable processing of such waste. The research contributions are divided into three major parts. The first part reviews the relevant literature in the areas of environmental concerns related to the electrical and electronic recovery sector and end-of-life product recovery decision support tools. The second part investigates the 'Recycling Process Planning' framework, which incorporates product evaluation, legislative compliance monitoring, and an ecological and economic assessment to generate bespoke eco-efficient recycling process plans for the recovery and recycling of electrical and electronic equipment. The third part includes the design and implementation of a novel computer-aided recycling process planner that demonstrates the application of the recycling process planning framework and the associated ecological and economic assessment methodology to identify the most appropriate end-of-life options for WEEE. The validity of the research concept has been demonstrated via three case studies. The results from these case studies have highlighted the impact that the proposed recycling process planning framework could have in identifying the many improvements which could be made to current recovery and recycling practices through the adoption of a systematic approach to generating recycling process plans based on the most up-to-date information and knowledge on recycling processes. In summary, this research has generated practical and powerful models and tools to improve the environmental and economic performance of WEEE recycling and to provide invaluable support for the long-term sustainability of the electrical and electronic recovery sector.
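One way to picture the assessment step is a simple weighted ranking of end-of-life routes by economic and ecological scores. The Python sketch below is purely illustrative; the thesis's methodology, criteria and data are far richer, and every number and weight here is invented.

```python
# Illustrative-only ranking of end-of-life routes by combining a
# normalised economic score and a normalised ecological score.
options = {
    #  route:             (net value GBP/unit, kg CO2e avoided/unit), made up
    "reuse":              (12.0, 30.0),
    "component recovery": (6.0, 18.0),
    "material recycling": (2.0, 9.0),
}

def rank(options, w_econ=0.5, w_eco=0.5):
    # Normalise each criterion to [0, 1] so the weights are comparable.
    vals = list(options.values())
    max_econ = max(v[0] for v in vals)
    max_eco = max(v[1] for v in vals)
    scored = {name: w_econ * e / max_econ + w_eco * c / max_eco
              for name, (e, c) in options.items()}
    return sorted(scored.items(), key=lambda kv: -kv[1])

print(rank(options))   # best-first list of end-of-life routes
```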
- Published
- 2008
34. Negation and antonymy in sentiment classification
- Author
-
Khan, Shahzad
- Subjects
004 - Published
- 2008
35. Chemically tuning dynamic networks and supramolecular assemblies to enable synthetic extracellular matrices for tissue engineering
- Author
-
Hafeez, Shahzad
- Published
- 2023
36. An Examination of Trauma-Informed Medical Education in the Emergency Medicine Clerkship: Opportunities for Learner-Centered Curricular Development
- Author
-
Shahzad, Ahmed Taha
- Published
- 2023
37. Essays on the making of a market : resources, technologies and social construction : insights from mobile communications
- Author
-
Ansari, Shahzad Mumtaz
- Subjects
658 - Published
- 2004
38. The effect of process conditions on aggregation during precipitation of calcium oxalate monohydrate (COM)
- Author
-
Mumtaz, Hassan Shahzad
- Subjects
660 - Published
- 1999
39. Vibration design by means of structural modification
- Author
-
Akbar, Shahzad
- Subjects
621.8 ,Eigenvalue ,Hopfield neural network - Published
- 1998
40. Financing small businesses : a comparative study of Pakistani-immigrant businesses and UK-indigenous businesses in the travel trade
- Author
-
Yousuf, Shahzad, Harper, M., and Brewster, Chris
- Subjects
658 ,Capital structure ,Networks ,Family businesses - Abstract
This research is about the financing practices of Pakistani-immigrant and indigenous-owned small travel agents. The study provides an understanding of the capital structures of businesses owned by both groups and compares these to draw out similarities and differences between the groups. The research integrates the 'ethnic enclave' immigrant theory, capital structure theory (in particular the Pecking Order Hypothesis), the role of 'networks' in business financing, and business life-cycle theories. The research question and the research hypotheses emerged from the literature reviewed. Ten case studies, five Pakistani businesses and five indigenous businesses, confirmed the hypotheses, which formed the basis of a survey of sixty businesses, thirty in each group. The case study data are considered invaluable since they provided real evidence of the sensitive nature of financial information in these businesses. The methodology adopted was a combination of qualitative and quantitative approaches. The findings of the study show that there are more similarities than differences between the capital structures of the two groups of businesses. The nuclear family plays a crucial role throughout the life-cycle of the business in both groups. The role of family labour is not as prominent as in other industries such as confectioners, tobacconists and newsagents (CTNs). Informal sources of finance are preferred over formal sources by both groups of businesses due to their availability and lower cost. The Pecking Order Hypothesis applies to both groups of businesses. The main sources of formal finance were high street banks, bank overdrafts and loans. Pakistani businesses were not disadvantaged in any way by the formal providers of finance. This research is the first to report on the comparative capital structures of both groups of businesses. Although this research makes a considerable contribution to the small business finance literature, further research should be conducted in this area.
- Published
- 1997
41. Modelling the hydrological impacts of land cover change in the Siran Basin, Pakistan
- Author
-
Jehangir, Shahzad
- Subjects
550 - Abstract
Many forested catchments in northern Pakistan have undergone land cover change during the last few decades. Extreme floods and extended droughts observed in these areas have led to the question: how do human influences affect the water balance of a montane catchment? The underlying socio-political factors that have led to the changes in forest cover and catchment hydrology are well documented, but there have been very few efforts to spatially correlate the cover changes with the catchment water balance. A deterministic model based on high-resolution spatial and temporal data offers the ability to simulate the hydrological impacts of changes in land cover in a spatial context. In an attempt to assess the impacts of changing forest cover on individual hydrological processes, a GIS-based model, Siran_HYDMAPS, has been developed for the Siran Basin, Pakistan. The model integrates the spatial databases with well-known hydrological process algorithms (e.g. the Penman-Monteith evapotranspiration and Green-Ampt infiltration models). Spatially distributed static (topographic and soil) parameters for the model are extracted from a regional GIS developed specifically for the project. The dynamic (vegetation-related) parameters are estimated from land cover maps derived by digital processing of multi-resolution, multi-temporal Landsat MSS (5.3.1979) and TM (10.7.1989) imagery. Relative relief and shadowing in the rugged terrain of the Himalayan foothills, which cause major problems in image processing, have been given particular attention. A rule-based approach, integrated with the GIS, was adopted to refine the land cover maps for mapping the level II forest classes. Mapping of forest cover changes was carried out by post-classification change detection techniques. Siran_HYDMAPS predicts a decrease in the radiation balance and interception capacity, and an increase in the evapotranspiration and catchment response, of the Siran Basin as a result of land cover changes. It is concluded that the water imbalances observed in this catchment during the last two decades were caused by the integrated effects of land cover changes and climatic factors.
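Of the process algorithms named above, Green-Ampt infiltration is compact enough to show. The Python sketch below solves the model's implicit cumulative-infiltration relation by fixed-point iteration; the parameter values are typical textbook figures, not the Siran Basin calibration.

```python
# Green-Ampt infiltration: cumulative infiltration F (cm) at time t
# satisfies F = K*t + psi*d_theta*ln(1 + F/(psi*d_theta)), solved by
# fixed-point iteration (the update is a contraction here).
import math

def green_ampt_F(t_hr, K=0.65, psi=16.7, d_theta=0.34, tol=1e-8):
    """K: hydraulic conductivity (cm/h); psi: wetting-front suction (cm);
    d_theta: moisture deficit. Values are illustrative textbook figures."""
    s = psi * d_theta
    F = K * t_hr or 1e-6                     # initial guess
    while True:
        F_new = K * t_hr + s * math.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

F = green_ampt_F(2.0)
rate = 0.65 * (1.0 + 16.7 * 0.34 / F)        # capacity f = K*(1 + s/F)
print(f"F(2h) = {F:.2f} cm, f = {rate:.2f} cm/h")
```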
- Published
- 1995
42. The physiology of potassium during exercise and recovery
- Author
-
Qayyum, Mohammed Shahzad
- Subjects
612 ,Skeletal muscle - Published
- 1994
43. Reconfigurable flight control systems for a generic fighter aircraft
- Author
-
Aslam-Mir, Shahzad
- Subjects
629.135 ,Aircraft flight control & aircraft instrumentation - Published
- 1992
44. An investigation into the microstructures and their relationships to the properties of centrifugally-cast, bronze, 'plain' bearings
- Author
-
Alam, Shahzad
- Subjects
669 ,Metallurgy & metallography - Published
- 1991
45. Physics MCQs for the Part 1 FRCR
- Author
-
Ilyas, Shahzad, Matys, Tomasz, Sheikh-Bahaei, Nasim, Yamamoto, Adam K., and Graves, Martin J.
- Published
- 2011
- Full Text
- View/download PDF
46. Extracorporeal membrane oxygenation in patients with severe respiratory failure from COVID-19.
- Author
-
Shaefi, Shahzad
- Abstract
PURPOSE: Limited data are available on venovenous extracorporeal membrane oxygenation (ECMO) in patients with severe hypoxemic respiratory failure from coronavirus disease 2019 (COVID-19). METHODS: We examined the clinical features and outcomes of 190 patients treated with ECMO within 14 days of ICU admission, using data from a multicenter cohort study of 5122 critically ill adults with COVID-19 admitted to 68 hospitals across the United States. To estimate the effect of ECMO on mortality, we emulated a target trial of ECMO receipt versus no ECMO receipt within 7 days of ICU admission among mechanically ventilated patients with severe hypoxemia (PaO2/FiO2 < 100). Patients were followed until hospital discharge, death, or a minimum of 60 days. We adjusted for confounding using a multivariable Cox model. RESULTS: Among the 190 patients treated with ECMO, the median age was 49 years (IQR 41-58), 137 (72.1%) were men, and the median PaO2/FiO2 prior to ECMO initiation was 72 (IQR 61-90). At 60 days, 63 patients (33.2%) had died, 94 (49.5%) were discharged, and 33 (17.4%) remained hospitalized. Among the 1297 patients eligible for the target trial emulation, 45 of the 130 (34.6%) who received ECMO died, and 553 of the 1167 (47.4%) who did not receive ECMO died. In the primary analysis, patients who received ECMO had lower mortality than those who did not (HR 0.55; 95% CI 0.41-0.74). Results were similar in a secondary analysis limited to patients with PaO2/FiO2 < 80 (HR 0.55; 95% CI 0.40-0.77). CONCLUSION: In select patients with severe respiratory failure from COVID-19, ECMO may reduce mortality.
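For readers unfamiliar with the adjustment step, a confounder-adjusted Cox model of this kind can be fitted in a few lines with the lifelines library. The file and column names below are hypothetical, and the published analysis additionally handles eligibility windows, follow-up and censoring rules.

```python
# Hedged sketch of a multivariable Cox proportional-hazards fit of the
# kind used to estimate the ECMO hazard ratio; data are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("icu_cohort.csv")   # hypothetical one-row-per-patient data

cph = CoxPHFitter()
cph.fit(df[["time_to_event", "died", "ecmo", "age", "pf_ratio"]],
        duration_col="time_to_event", event_col="died")
cph.print_summary()                   # HR for "ecmo" is exp(coef)
```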
- Published
- 2021
47. Current challenges in cardiac rehabilitation: strategies to overcome social factors and attendance barriers.
- Author
-
Chindhy, Shahzad
- Abstract
Introduction: Cardiac rehabilitation (CR) significantly reduces secondary cardiovascular events and mortality and is a class 1A recommendation by the American Heart Association (AHA) and American College of Cardiology (ACC). However, it remains an underutilized intervention and many eligible patients fail to enroll in or complete CR programs. The aim of this review is to identify barriers to CR attendance and discuss strategies to overcome them. Areas covered: Specific barriers to CR attendance and participation will be reviewed. This will be followed by a discussion of solutions/strategies to help overcome these barriers, with a particular focus on home-based CR (HBCR). Expert opinion: HBCR alone or in combination with center-based CR (CBCR) can help overcome many barriers to traditional CBCR participation, such as schedule flexibility, time commitment, travel distance, cost, and patient preference. Using remote coaching with indirect exercise supervision, HBCR has been shown to have benefits comparable to CBCR. At this time, however, funding remains the main barrier to universal incorporation of HBCR into health systems, necessitating additional cost-benefit analyses and outcome studies. Ultimately, the choice of HBCR should be based on patient preference and the availability of resources.
- Published
- 2020
48. An improved procedure for isolation of high-quality RNA from nematode-infected Arabidopsis roots through laser capture microdissection.
- Author
-
Anjam, Muhammad Shahzad
- Abstract
Background: Cyst nematodes are biotrophs that form specialized feeding structures in the roots of host plants, which consist of a syncytial fusion of hypertrophied cells. The formation of the syncytium is accompanied by profound transcriptional changes and active metabolism in infected tissues. The challenge in gene expression studies of the syncytium has always been the isolation of pure syncytial material and the subsequent extraction of intact RNA. Root fragments containing syncytium had been used for microarray analyses; however, the inclusion of neighbouring cells dilutes the syncytium-specific mRNA population. Micro-sectioning coupled with laser capture microdissection (LCM) offers an opportunity for the isolation of feeding sites from heterogeneous cell populations, but recovery of intact RNA from syncytium dissected by LCM is complicated by the extended steps of fixation, tissue preparation, embedding and sectioning. Results: In the present study, we have optimized the sample-preparation procedure for LCM to isolate high-quality RNA from cyst-nematode-induced syncytia in Arabidopsis roots, suitable for transcriptomic studies. We investigated the effect of various sucrose concentrations as cryoprotectant on RNA quality and on the morphology of syncytial sections. We also compared various types of microscopic slides for strong adherence of sections while removing embedding material. Conclusion: The use of optimal sucrose concentrations for cryoprotection plays a key role in RNA stability and the morphology of sections. Treatment with higher sucrose concentrations minimizes the risk of RNA degradation, whereas longer incubation times help maintain the morphology of tissue sections. Our method allows isolation of high-quality RNA from nematode feeding sites that is suitable for downstream applications such as microarray experiments.
- Published
- 2016
49. Translational Medicine: Tools And Techniques
- Author
-
Shahzad, Aamir
- Abstract
Translational Medicine: Tools and Techniques provides a standardized path from basic research to the clinic and brings together various policy and practice issues to simplify the broad interdisciplinary field. With discussions from academic and industry leaders at international institutions who have successfully implemented translational medicine techniques and tools in various settings, readers will be guided through implementation strategies relevant to their own needs and institutions. The book also addresses regulatory processes in the USA, EU, Japan and China. By providing details on omics sciences techniques, biomarkers, data mining and management approaches, case reports from industry, and tools to assess the value of different technologies and techniques, this book is the first to provide a user-friendly go-to guide for key opinion leaders (KOLs), industry administrators, faculty members, clinicians, researchers, and students interested in translational medicine. • Includes detailed and standardized information about the techniques and tools used in translational medicine • Provides specific industry case scenarios • Explains how to use translational medicine tools and techniques to plan and improve infrastructures and capabilities while reducing cost and optimizing resources
- Published
- 2015
50. Biological significance of HORMA domain containing protein 1 (HORMAD1) in epithelial ovarian carcinoma.
- Author
-
Shahzad, Mian MK
- Abstract
The present study was undertaken to determine the expression and biological significance of HORMAD1 in human epithelial ovarian carcinoma. We found that a substantial proportion of human epithelial ovarian cancers expressed HORMAD1. In vitro, HORMAD1 siRNA enhanced docetaxel induced apoptosis and substantially reduced the invasive and migratory potential of ovarian cancer cells (2774). In vivo, HORMAD1 siRNA-DOPC treatment resulted in reduced tumor weight, which was further enhanced in combination with cisplatin. HORMAD1 gene silencing resulted in significantly reduced VEGF protein levels and microvessel density compared to controls. Our data suggest that HORMAD1 may be an important therapeutic target.
- Published
- 2013