CSC71001 Programming I Assessment 2 Paper

CSC71001
Programming I
Assessment 2
Details:
Title: Practical Skills
Weighting: 15% of unit grading
Due: Monday Week 7 (16th December 2019 at 11PM)
Specifications
Overview:
Your task is to create a game in Greenfoot, with three types of elements: a PLAYER object, controlled by the player; FOOD objects that can be ‘caught’ by the player; and ENEMY objects, that can ‘catch’ the player. If the enemy catches the player, then the game is over.
You must create a new scenario and you must choose a theme for your game that is not crabs/worms and lobsters. All your elements should suit your theme, including the background and the actors. The movement of the actors should ‘make sense’ as per the theme of your game. We do not expect to see the same theme or game created by any two students – be original!
Details:
Player:
· You will create a Player class.
· At the beginning of the game, there must be one PLAYER object on the screen.
· The PLAYER must be controlled by the keyboard and must, at minimum, move forward automatically and be able to turn left and right. For example, when the left arrow key is pressed, the PLAYER will turn to the left while moving forward; when the right arrow key is pressed, the PLAYER will turn to the right while moving forward.
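A minimal sketch of this behaviour, using the standard Greenfoot Actor methods (the movement speed and turn angles below are illustrative only, not required values):

```java
import greenfoot.*;

/**
 * Sketch of a player actor that moves forward automatically and turns
 * left or right while the arrow keys are held down.
 */
public class Player extends Actor
{
    public void act()
    {
        move(3);                            // automatic forward movement
        if (Greenfoot.isKeyDown("left"))
        {
            turn(-4);                       // turn left while moving
        }
        if (Greenfoot.isKeyDown("right"))
        {
            turn(4);                        // turn right while moving
        }
    }
}
```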
Food
· You will create one Food class.
· There must be eight FOOD objects on the screen at the beginning of the game.
· Each FOOD object must have random movement on the screen – that is, it must turn in random directions and move at a random speed. The food must be able to be caught by the PLAYER.
· When the FOOD is caught by the PLAYER, it should be removed from the screen.
· Later in the Portfolio, you will use the Food class to create different types of Food objects so you will need to think about the theme for your food carefully.
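One possible sketch of the random movement, again using standard Greenfoot methods (the turn range and speeds are illustrative; removal of caught food is handled by the Player in this sketch):

```java
import greenfoot.*;

/**
 * Sketch of a food item that wanders randomly: it turns by a random
 * amount and moves at a random speed each step. It is removed by the
 * Player when the two actors touch.
 */
public class Food extends Actor
{
    public void act()
    {
        turn(Greenfoot.getRandomNumber(41) - 20);   // random turn between -20 and +20 degrees
        move(Greenfoot.getRandomNumber(3) + 1);     // random speed between 1 and 3 cells
    }
}
```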

Enemy:
· You will create an Enemy class.
· There must be at least one ENEMY object on screen at the beginning of the game.
· Each ENEMY must, at minimum, move at a constant speed and turn in random directions, and its behaviour should be different from the food's (you cannot use the exact same code).
· If the ENEMY catches the player, the game should end (see the sketch after the sound requirements below).
Sound:
· You should include sound effects that fit the theme of your game. These can be either inbuilt or created by you. At least one sound should be created/recorded by you in Greenfoot.
· You should include sound for when the PLAYER is caught by the ENEMY.
· You should include sound for when the PLAYER catches or eats a FOOD.
· You should include sound for when the PLAYER wins the game.
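A minimal sketch combining the enemy and end-of-game sound requirements, using standard Greenfoot methods (the speed, turn range and sound file name are placeholders, not prescribed values):

```java
import greenfoot.*;

/**
 * Sketch of an enemy that moves at a constant speed, turns in random
 * directions from time to time, and ends the game with a sound if it
 * catches the player.
 */
public class Enemy extends Actor
{
    public void act()
    {
        move(2);                                        // constant speed
        if (Greenfoot.getRandomNumber(100) < 10)        // turn only occasionally,
        {                                               // unlike the food's constant wobble
            turn(Greenfoot.getRandomNumber(91) - 45);
        }
        if (isTouching(Player.class))
        {
            Greenfoot.playSound("caught.wav");          // placeholder: your own recorded sound
            Greenfoot.stop();                           // game over
        }
    }
}
```

The sounds for eating a FOOD and for winning can be played the same way, by calling Greenfoot.playSound at the point where the Player removes a Food object, or where the last Food object is removed.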
Additional Details:
You can choose to use the inbuilt media for backgrounds and actors OR you can choose to add your own (see Module 4 for how to do this), or some combination of the two. If you do add your own, make sure you use PNGs with transparency for your actors, and keep your file size small.
General criteria: playability, accuracy, careful coding, maintainability of the code, commenting, choice of names for classes, methods (and variables if necessary).
Enhancements for extra credit:
You may like to add the following features, for extra credit:
· Use alternate keys to move the player “up”, “down”, “left” and “right”.
· Add animation when the PLAYER is moving.
· Add animation when the FOOD is moving.
· Add animation when the ENEMY is moving.
· Use your own images or images sourced from the internet. These must be referenced in your documentation and commented in your code.
· Use your own sounds or sounds sourced from the internet. These must be referenced in your documentation and commented in your code.
· Add a score which displays how many FOOD pieces the PLAYER has caught.
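If you attempt the score enhancement, one possible approach (assuming the eating check lives in the Player class) is to keep a counter and redraw it with the world's showText method; the coordinates and sound file name below are illustrative only:

```java
import greenfoot.*;

/**
 * Fragment of the Player class illustrating the optional score enhancement:
 * a counter is increased each time a FOOD is eaten and redrawn on screen.
 */
public class Player extends Actor
{
    private int score = 0;      // number of FOOD pieces caught so far

    private void checkFood()    // call this from act()
    {
        if (isTouching(Food.class))
        {
            removeTouching(Food.class);
            Greenfoot.playSound("eat.wav");                    // placeholder sound file
            score++;
            getWorld().showText("Score: " + score, 60, 20);    // redraw the score near the top-left
        }
    }
}
```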

Submission:
You must export your game as both:
· a JAR file (application); and
· a Greenfoot archive (gfar) file.
Please name your files appropriately (see below):
e.g. yourSCUusername_Ass2.jar and yourSCUusername_Ass2.gfar and yourUsername_Enhancements.doc
If you do not submit in the above format, your assignment will not be marked. Submit both of these to the MySCU site under the “Assignment 2” link. Make sure you Submit (not just Save) by the due date. Your tutor will contact you if they have any questions about your submission.
Getting Help
Who can you get help from? Use this diagram to determine from whom you may seek help with your program.
[Diagram: sources of help (Lecturer, Tutors, Online Forums, Relatives, Students outside the unit, Hired coders, Classmates, Private Tutors, Other), each categorised as Encouraged, Attribution Required, Ask tutor, or Not acceptable.]
This assignment, which is to be completed individually, is your chance to gain an understanding of the fundamental concepts of object-oriented programming and coding syntax on which later learning will be based. It is important that you master these concepts yourself.
Since you are mastering fundamental skills, you are permitted to work from the examples in the study guide or textbook, but you must acknowledge assistance from other textbooks or classmates. In particular, you must not use online material or help from others, as this would prevent you from mastering these concepts.

Research Proposal for Investment and International Business

Foreign Direct Investment, or FDI as it is generally known, is defined by many authors and institutions. As defined by UNCTAD (1993), “FDI refers to an investment made to acquire lasting interest in enterprises operating outside of the economy of the investor”. One of the most renowned and widely followed definitions of FDI is that of the IMF. The IMF (2003) defines FDI as “a category of international investment that reflects the objective of the resident in one economy (known as the direct investor) obtaining a lasting interest in an enterprise resident in another economy (known as the direct investment enterprise), where the investment is treated as direct investment when the direct investor has obtained 10 percent or more of the ordinary shares of that entity”.

Understanding the importance of FDI in the global trade environment is essential. Global FDI rose to $916 billion in 2005, a 29 percent increase compared with 2004 (World Investment Report, 2006). The top contributing TNCs in global FDI flows were General Electric, Vodafone and Ford Motors, which together held about $877 billion of foreign assets, approximately 19% of the total foreign assets held by the top hundred TNCs. In the developing world, Hutchison Whampoa (Hong Kong, China) kept the leading position with foreign assets of $68 billion, which accounted for around 17% of the total foreign assets held by the top 100 TNCs from the developing world.

The existing literature shows that FDI trends have changed significantly in the past few years and that this change is also having an impact on developing countries and their emerging markets.

Choice of Topic:

This project proposes to analyze the performance of India, an emerging economy, in the post-liberalisation period by identifying and scrutinizing the key sectors and determinants of Foreign Direct Investment (FDI) in the Indian market.

Proposed Title:

“FDI and Economic Growth: India, 1991-2008”.

Aim of the Research:

During the past few years, there has been a significant transformation in the policies and approaches towards Foreign Direct Investment (FDI) in most developing countries. For instance, the BRIC economies (Brazil, Russia, India and China) are attracting many foreign multinationals to invest in their economies. According to Salvatore (2007), FDI is usually long term and is regarded as stabilising for the host country. Developing economies have realised the importance and advantages of FDI in boosting industrialisation and encouraging economic growth. The Indian economy started growing strongly in the post-liberalisation period, that is, after 1991, and the highly regulated FDI regime was liberalised, with further reforms in FDI policy introduced in 2003. Paula (2008) states that the relaxation of FDI regulations in 2003 by the government of India has been a significant factor in augmenting the inflow of FDI into the Indian economy. Since 2003, an enormous amount of FDI has come into India through multinational and international firms and various other foreign investors, contributing to the growth of the country.

Therefore, the proposed aim of my dissertation is to identify the determinants of FDI inflow into India and the key sectors that are most attractive for foreign investment in the Indian market.

Research Objectives:

The main objective of this research is to examine and assess the pros and cons of FDI and the factors responsible for the growth of the Indian economy. This will be achieved through:

Analyzing economic performance of India from 1991 to 2008.

Identification of whether FDI directly or indirectly contributes to the economic development of an emerging economy such as India.

Examination of the determinants of FDI in India.

Investigation of key sectors for FDI in Indian market.

Determination of appropriate mode of entry for FDI inflow in the Indian market with reference to the market conditions.

Rationale behind choice of subject for this research:

Economics has been one of the author's favourite subjects. After completing his degree in Commerce with a specialisation in Tax, Auditing and Economics, the author worked closely with a research analyst in a private firm called Bio-Tech Envirocare Private Ltd. in India. While working with this firm, he had many opportunities to keep track of foreign direct investment in the Indian economy with respect to the pharmaceutical industry, as the firm had its core business operations in that industry. This has encouraged the author to take up this research on a broader scale, considering the Indian economy as a whole and exploring its core potential, capacity and capability. It has also enabled the author to contact some professional research experts and obtain their views on the development of FDI in India.

Research Design:

According to Bryman and Bell (2007), a research design is a methodical plan that directs the proposed research. It is a blueprint for the entire research.

My Research plan includes:

Collection of Data

Analysis of Data

Interpretation

Findings and Conclusion

Literature Review:

I.A. Moosa (2002) describes Foreign Direct Investment as the process whereby residents of one country (the source country) acquire ownership of assets for the purpose of controlling the production, distribution and other activities of a firm in another country (the host country). FDI has recently been one of the major elements contributing to the global economy. Sinha (2008) states that some of the main reasons for FDI in developing economies are a shortage of domestic capital due to limited internal savings, insufficient public funds and the technology gap between developing and developed countries. All of this creates demand for large amounts of external capital to undertake major projects, indirectly promoting growth. The importance of FDI in developing countries is largely a post-1991 phenomenon, when the importance of the BRIC economies was recognised. The Indian economy, being one of the BRIC economies and among the world's fastest growing, is expected to play a key role in the future world economy. The October 2003 edition of Goldman Sachs' Global Economics Paper contains the article “Dreaming with BRICs: The Path to 2050”, which clearly states the importance of India in comparison with Brazil, China and Russia.

India's trade and investment picture is broadly divided into the pre-1991 and post-1991 periods. The pre-1991 period in the Indian economy was marked by severe restrictions in the policies on foreign investment. The involvement of foreign entities was categorized into financial participation and technical participation, and the intentions of companies wanting to invest in local firms were heavily monitored. Sectors were specifically designated as open for technical participation, financial participation or both. Technical collaborations were given more importance, as these involved the introduction of new technology and know-how by foreign firms while at the same time restricting foreign control over local firms. The Foreign Exchange Regulation Act, 1973 (FERA) was introduced, which gave very strong preference to Indian companies; many foreign multinationals, such as Coca-Cola, had no option but to close down their operations in India, as foreign firms were asked to reduce their equity holdings in Indian companies to less than forty percent. Between 1959 and 1979, the total foreign investment approved by the government dropped to $70 million, resulting in a negative net inflow (Kumar, 1994). Nevertheless, the pre-1991 period laid a platform for FDI, as the industrialization policies, economic environment and immense human capital later attracted more FDI towards India.

The post-1991 period saw many relaxations in FDI policy, and the licensing regime was also amended with many relaxations. Many sectors of the economy were opened to automatic approval of larger foreign stakes in companies, and many recent positive steps have been taken by the Indian government to facilitate the increase of FDI in many sectors. FDI in India is controlled and regulated under the Foreign Exchange Management Act, 1999 by the Reserve Bank of India. The entry modes for FDI in India are predefined: joint ventures and collaborations, and investments in a local company by a foreign entity.
Automatic approval was granted for FDI with equity of up to one hundred percent in sectors such as distribution, communication and electricity, for investments that did not exceed Rs 15 billion. After 2002, FDI was given greater importance, with a new sub-section created within the Indian industry administration that specifically looked after FDI needs and their improvement. In 2004, a new committee headed by Ratan Tata was formed; this committee made frequent visits to industrial locations in India that were in urgent need of investment and held meetings with large overseas companies to bring those locations to their attention.

Methodology:

To achieve the stated research objectives,

The author will mostly rely on secondary data, including textbooks by various authors, online journals and published articles, featured articles on the websites of financial institutions, articles from databases such as ScienceDirect and Emerald, and material found via the Google search engine. Most of the statistical data will be accessed from official sources such as the Reserve Bank of India, the OECD, the International Monetary Fund (IMF), the World Bank and past UN annual reports.

The basic strategy used in the research will be methodological triangulation, which involves both qualitative and quantitative methods of analysis. The author is also partly relying on primary sources: he will interview several research analysts from India, the United Kingdom and the European Union by sending them a questionnaire with 9-10 questions, and on receiving their answers a qualitative analysis will be carried out so as to capture the actual trend of the determinants of FDI and its effects. Video conferencing with a research analyst from India will also be conducted with the help of Skype.

As far as the design of the research is concerned, the type of research selected by the author is a retrospective (trend) study, which looks back at past events and identifies particular trends in them. Both probability and non-probability sampling methods will be used to give a proper approach to each step taken in this research. All the data will be categorized into these two categories and a trend will then be extracted from them.

Contribution of the Study:

This study extends many previous studies in the same context, but it specifically covers the whole Indian economy from a broader view rather than concentrating on a single sector, since a single-sector focus does not give an actual trend of the country's performance, its attractiveness for FDI or its further potential for sustainability.

The existing literature shows that only a limited number of studies have concentrated on and researched the topic from India's point of view. Hence the author has chosen to carry out an extended study on this topic.

This study will therefore fill the gap in the literature by taking a broader view of FDI in India, as compared with the narrower approaches of earlier work, covering the Indian economy as a whole rather than a specific sector only.

Time Schedule:

A time schedule is very important in a project in order to properly align responsibilities and keep the author on track for the timely completion of the project. The time scale for this research is set as follows:

June-July (2010)

Selection of topic

Literature Review

Approval from Tutor

July-August (2010)

Collection of Data

First draft of proposal

Break

August-September (2010)

Proposal approval from tutor

Changes in the proposal (if any)

Re-approval from tutor (if necessary)

September-October (2010)

First draft of actual dissertation

Approval from tutor

Changes in the first draft (if any)

Re-approval from tutor (if necessary)

Final draft of dissertation

Final approval from tutor

Submission of Project

A detailed Gantt chart will be prepared for the whole of the project with the inclusion of all the minor details.

Examination of the UK Stock Market

There has been much academic discourse on fund managers' stock valuation and recommendations using style investment strategies, and there is weighty empirical evidence to suggest that value stocks outperform growth stocks across the commonly used valuation indicators in international equities.

This result is also consistent with results observed in the UK market. The relevant literature discussed in this proposal shows that when ranked according to price-to-earnings, price-to-book, price-to-cash-flow and dividend yield, value shares outperformed growth shares in the UK market. There are divergent views on the rationale for this result, with some authors stating that the value premium is a result of higher risk, while others attribute it to contrarian investor behaviour. The use of another valuation indicator, the PEG ratio, in analysts' stock recommendations is becoming popular. However, despite the PEG ratio delivering better performance when compared with the P/E ratio, a comprehensive study of its use in classifying investment style has not been undertaken, especially in the UK market.

The focus of this proposal is that the research will examine the performance of value and growth stocks in the UK market and, in particular, introduce the PEG ratio into the study to widen understanding of the rationale for the results of earlier studies. Data from 1992 to 2007 will be used in the quantitative analysis that supports this research.

1.0 Introduction

One belief held in the investment world is the adoption of investment styles and the resultant performance of asset returns based on those styles. This belief, according to Donald (2008), “comprises all that we know or think that we know about the ways asset returns are generated”. The investment styles most commonly used by fund managers and investors are the value and growth styles.

Many empirical studies of this form of investment style have suggested that, over the years, value shares outperform growth shares irrespective of the valuation tool used (Fama and French, 1992; Lakonishok, Shleifer and Vishny, 1994). In those studies the most commonly used valuation tools are the book-to-market and earnings-to-price ratios.

More recently, fund managers have begun to adopt another valuation tool, the PEG ratio, which adjusts the price-earnings ratio by its growth rate. The PEG ratio addresses one of the flaws of the P/E ratio in explaining the difference between two comparable companies, namely its failure to account for the growth rate (Estrada, 2005). Studies of this new valuation indicator in measuring the performance of value shares against growth shares are quite scarce. Examining the outperformance of value shares over growth shares in the UK therefore becomes imperative with the use of this new valuation indicator and the updated data that will be studied in this research.
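For clarity, the PEG ratio as commonly defined divides the P/E ratio by the expected earnings growth rate expressed in per cent:

PEG = (P/E) / g

For example, a share on a P/E of 20 with 10% expected annual earnings growth has a PEG of 2.0, whereas a share on the same P/E with 20% expected growth has a PEG of 1.0, so the PEG ratio ranks the faster grower as the cheaper of the two.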

1.1 Background Study: The London Stock Exchange

Due to the central geographical location of London, with the other world time zones making reference to it, London has assumed the role of financial centre of the world.

The London Stock Exchange, established around 1700, has played a dominant role in shaping the securities market both domestically and internationally (Michie, 1999). According to Blake (2000), the capital market on the London Stock Exchange comprises the FTSE 100, made up of the top 100 companies by market capitalisation, and the FTSE 250, FTSE 350 and FTSE All-Share indices; each index comprises the corresponding number of top companies by market capitalisation, with the All-Share index making up the entire market.

The performance of the UK market is consistent with the trends in other developed markets when sorted by price-to-earnings. Fama and French (1998) found that annual returns of value stocks in the UK were 17.46% while those of growth stocks were 14.81%, using data spanning 1975-95. Before then, analysis of data from the London Share Price Database showed that value shares (lowest price/earnings ratio) had an annual return of 17.76% while growth shares (highest price/earnings ratio) had an annual return of 10.80%, using data collected between 1961 and 1985 (Tweedy, Browne, 2008). Gregory, Harris and Michou (2001) updated the data to 1975-1998 and found that, for portfolios sorted by earnings-to-price ratio, the lowest-ranked decile portfolio (value) had an average annual return over five years of 24.62% while the highest-ranked decile portfolio (growth) had a return of 20.64%.

1.2 Research Rationale

The UK stock market has not been studied as extensively as the US market in terms of value and growth investment performance. Gregory, Harris and Michou's study of 1975-1998 remains the most comprehensive study of the outperformance of value shares over growth shares in the UK (Xinzhong, 2001). Moreover, to the best of my knowledge, previous studies have not used the PEG ratio as a valuation indicator to examine whether the returns are consistent with returns measured using the earnings-price ratio.

1.3 Research Aim

The purpose of this research is to compare the performance of value and growth shares on the London Stock Exchange using both the P/E and PEG valuation tools.

1.4 Research Objectives

To examine the performance of value stocks and growth stocks in the UK stock market between 1992 and 2007, using companies in the FTSE 350 (the FTSE 350 makes up 94% of the total market by value, which will be enough to describe market performance).

To examine whether using the PEG ratio, rather than the P/E ratio used in other studies, to classify investment style gives a consistent explanation of the outperformance of value shares over growth shares.

To examine whether the performance of value and growth shares using current data sources is consistent with earlier studies.

To establish, using statistical analysis, the significance of the performance results.

2.0 Literature Review

This section would look at the views and writings of other researchers in the area of value and growth investment styles, and the usefulness of using either the P/E ratio or the PEG ratio in classifying the two investment styles in other markets. It would also show the contributions that have been made in this subject area particularly in the London Stock Exchange.

2.1 Value Vs Growth Investment style

Most investors in the equity market usually adopt style strategies in their investment decisions. This notion, as explained earlier in this paper, stems from the beliefs that characterise most investment decisions, although most often there is empirical evidence to support such beliefs. One such belief concerns value shares and growth shares: empirical evidence in most markets studied so far shows that value shares outperform growth shares irrespective of the valuation indicator used. Using the most commonly applied valuation indicators for classifying shares, value shares are generally defined as shares that have a low price-to-earnings ratio (Basu, 1977), a low price-to-book ratio (Fama and French, 1992), a low price-to-cash-flow ratio (LSV, 1994), a high dividend yield (Keppler, 1991) and a low price-earnings-growth ratio (Peters, 1991), while growth shares are regarded as shares that have high values of those valuation indicators and a low dividend yield.

2.1.1 Price-to-Earnings

In a study of over 500 stocks in the US spanning the period from 1957 to 1971, sorting the stocks from lowest P/E (value) to highest P/E (growth) portfolios, Basu (1977) showed that the lowest-ranked (value) portfolio had an average annual rate of return of 16.3% while the highest-ranked portfolio had an average annual rate of return of 9.3%. The outperformance of value stocks over growth stocks is also consistent with Fama and French's 1992 study of the same US market, with value stocks outperforming growth stocks by 0.68 points. In examining the effect of P/E ratios in a contrarian strategy, LSV studied stocks on the NYSE and the AMEX and sorted them according to their P/E; the results for five-year holding-period investment returns show that portfolios ranked by low P/E had an average annual return of 19.0%, while the highest-ranked portfolio returned 11.4% on average over the same five-year period (LSV, 1994). This consistency of value stock performance prompted Bauman, Conover and Miller to study 20 other established markets to see whether the performance of value stocks and growth stocks would yield results similar to those of the US market. In a study spanning 10 years, their results also showed that value stocks outperformed growth stocks in all the markets studied with different valuation indicators, including the price-to-book ratio introduced by Capaul, Rowley and Sharpe (Bauman, Conover and Miller, 2000). They also found that the performance observed had a firm-size effect.

Using updated data, Chan and Lakonishok (2004) provided further evidence for the earlier results obtained in other studies, even when sales-to-price ratios were included among the valuation indicators. Their results also show that irrespective of whether large-cap or small-cap stocks are considered, value stocks outperformed growth stocks. They used the same methodology to study non-US markets and observed the same result reported earlier by Bauman, Conover and Miller.

2.1.2 Book-to-Market Ratio

This valuation method is used to identify shares that are trading in the stock market either below or above their book value. Value shares are classified as shares that trade at less than their book (intrinsic) value, while growth shares are classified as shares that trade above their book value.

Fama and French championed the evidence in further support of the outperformance of value stocks over growth stocks in their study of non-financial stocks in the US market between 1963 and 1990, taking into consideration another factor in stock returns, market capitalisation. They observed that stocks with the lowest price-to-book ratio delivered better performance than stocks with the highest price-to-book value, and that in all the portfolios companies with smaller market capitalisation also performed better than large-cap stocks (Fama and French, 1992).

The evidence above was also supported by De Bondt and Thaler (1987) who, ranking stocks in the US market based on their book value, observed that portfolios formed from the lowest price/book value stocks performed significantly better than portfolios formed from the highest price/book value stocks, taking cognisance of their returns both prior to and after portfolio formation.

International evidence was provided by Sharpe, Capaul and Rowley (1993): in a study of major markets around the world, including the US, stocks were ranked according to price-to-book value and formed into value and growth portfolios. This study also included the UK market for the period from 1981 to 1992. The result of their analysis shows that in all of the countries studied, value stocks outperformed growth stocks.

2.1.3 Cash-Flow-to-Price

Ranking portfolios according to cash flow to price in the evaluation of value and growth investments, LSV (1994) provide evidence for portfolios formed between 1968 and 1990 in the US market. The returns show that portfolios ranked by the lowest price-to-cash-flow ratio outperformed portfolios ranked by the highest price-to-cash-flow ratio by 10.6 points. Providing further evidence from the returns on a strategy of discovering undervalued stocks based on a low price-to-cash-flow ratio, Keppler (1991) noted that empirical evidence from international equities supports the finding that low price-to-cash-flow stocks outperform high price-to-cash-flow stocks.

2.1.4 Price-Earnings Growth

As already noted, one of the shortcomings of using the P/E ratio to classify shares is its inability to differentiate between two comparable companies. The classification of investment styles using the PEG ratio is still very scarce in the literature, but the increasing significance of its use cannot be overlooked. According to Estrada (2005), in studies of portfolios sorted by PEG, the lowest-ranked portfolios outperformed the highest-ranked portfolios between 1982 and 1989; according to the literature, that was the earliest study of the PEG valuation indicator. Despite the shortcomings of using short-term earnings growth in estimating PEG, Easton (2003) observed a high correlation between the estimation of the expected rate of return on stocks using PEG and the estimation of returns using the P/E ratio. In his study of how analysts use earnings forecasts in generating stock recommendations, Bradshaw (2004) observed that analysts incorporate earnings forecasts in their recommendations, and his tests indicate that they value and recommend stocks based on the PEG ratio. Further evidence of the increasing use of the PEG ratio by analysts in ranking stocks in an international context was provided by Barniv, Hope, Myring and Thomas (2009). They observed that the strong positive relationship between analysts' recommendations and the PEG ratio in the US also extends to other strong investing countries, although they noted that there is a negative relationship between analysts' recommendations and future returns. Despite this evidence that analysts use the PEG ratio to recommend stocks, there appears to be little work on evaluating the performance of value stocks and growth stocks using this valuation indicator. One could argue that, since there is no positive relationship between analysts' recommendations and future stock returns according to the empirical evidence, and since, according to Estrada (2005), the holding-period return of stock valuation using the P/E ratio outperforms that using the PEG ratio, there should be no need to study the investment style performance of stocks using the PEG ratio. However, when the assessment of returns includes the risk factor, the PEG ratio outperforms the P/E ratio on most risk-assessment measures, and assessing stock returns would be incomplete without including the risk factor.

2.2 Value Vs Growth Investment Style in UK

The UK market is one of the strongest investor markets in the world, and evidence on the performance of value shares and growth shares is consistent with the results for the US and other international markets described above, irrespective of the valuation indicator used. When classified according to book-to-price value, Sharpe, Capaul and Rowley (1993) observed that low price-to-book shares outperformed high price-to-book shares by 31.5% between 1981 and 1992. The performance of the UK market is also consistent with the trends in other developed markets when sorted by price-to-earnings. Fama and French (1998) found that annual returns of value stocks in the UK were 17.46% while those of growth stocks were 14.81%, using data spanning 1975-95. Before then, analysis of data from the London Share Price Database showed that value shares (lowest price/earnings ratio) had an annual return of 17.76% while growth shares (highest price/earnings ratio) had an annual return of 10.80%, using data collected between 1961 and 1985 (Tweedy, Browne, 2009). Gregory, Harris and Michou (2001) updated the data to 1975-1998 and found that, for portfolios sorted by earnings-to-price ratio, the lowest-ranked decile portfolio (value) had an average annual return over five years of 24.62% while the highest-ranked decile portfolio (growth) had a return of 20.64%. When ranked according to dividend yield, shares with a high dividend yield are classified as value shares while shares with a low dividend yield are classified as growth stocks; Levis (1989) observed that value stocks outperformed growth stocks by as much as 6.3% in annual investment return.

The consistency of these results has not been tested in the UK market using the PEG ratio. The higher risk-adjusted returns shown by stock valuation using the PEG ratio compared with the P/E ratio make this study very important.

The consistent outperformance of value stocks over growth stocks has not been attributed solely to either the valuation indicator used or the investment style adopted. Fama and French attributed it to the riskiness of value stocks, but in their conclusions from the empirical evidence studied, Chan and Lakonishok stated that investor behaviour could be at the root of these results.

2.4 Summary

This review of major studies shows the enormous amount of work that has been done in examining value and growth stocks using different valuation indicators, consistently finding outperformance by value stocks. Since there is still no consensus among researchers on the explanation for these results, research into a testable rationale for choosing value stocks over growth stocks is ongoing.

3.0 Methodology.

Quantitative research design connects research questions to data (Punch, 2005, p. 63). The research design will be to compare the performance of value and growth shares on the London Stock Exchange over the period to be studied. The P/E and PEG ratios will be the valuation indicators used.

In this section, the data collection and source procedure, portfolio formation approach, performance measures to be used in analyzing the research topic will be discussed. A framework of the timeframe to undertake this research will also be set.

3.1 Data Collection and Sampling

The benchmark to be used in the analysis will be the FTSE 350 Index. The FTSE 350 Index makes up 94 percent of the market capitalisation by value, which will be enough to describe the market performance. The data will be collected as a monthly time series. The P/E ratio data will be sourced from DataStream, and the preceding growth rate, also from DataStream, will be used to adjust the P/E in order to obtain the PEG ratio for each company. The data described above constitute the primary data. The secondary data will be sourced from existing financial academic journals, unpublished conference papers and respected textbooks.

3.2 Portfolio Formation

Portfolios will be formed at the end of each preceding year: the P/E ratios and earnings growth rates of the companies in the FTSE 350 for that year will be sourced from DataStream. For example, the portfolio to be held in 1992 will be formed at the end of 1991 and assumed to be held throughout 1992 before being sold at the end of 1992. Companies will be ranked by P/E and PEG ratio from lowest to highest and divided into deciles; the lowest-ranked decile will form the value portfolio and the highest-ranked decile will form the growth portfolio.
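Purely as an illustration of the ranking step described above (the class, field and method names here are hypothetical, and the real data will come from the DataStream export rather than an in-memory list), the decile assignment could be sketched as follows:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical record for one FTSE 350 constituent in a given formation year.
class Company {
    String name;
    double ratio; // the P/E or PEG value used for ranking

    Company(String name, double ratio) {
        this.name = name;
        this.ratio = ratio;
    }
}

public class DecileFormation {

    // Sort companies by the chosen ratio (lowest first) and split them into ten
    // equally sized groups: decile 0 = value portfolio, decile 9 = growth portfolio.
    static List<List<Company>> formDeciles(List<Company> companies) {
        List<Company> sorted = new ArrayList<>(companies);
        sorted.sort(Comparator.comparingDouble(c -> c.ratio));

        List<List<Company>> deciles = new ArrayList<>();
        int n = sorted.size();
        for (int d = 0; d < 10; d++) {
            int from = d * n / 10;
            int to = (d + 1) * n / 10;
            deciles.add(new ArrayList<>(sorted.subList(from, to)));
        }
        return deciles;
    }
}
```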

3.3 Analytical Methodology

Analysis of the data will be done using both economic and statistical analysis. Each analytical method intends to answer fundamental questions regarding the outcome of the results.

3.3.1 Economic Analysis

This is the analysis of the performance of the returns of both the value and growth portfolios. It intends to answer the question of which portfolio outperforms the other within the period investigated. It also intends to answer the question of which of the two valuation indicators, the P/E ratio or the PEG ratio, gives a better indication of the performance of the portfolios, that is, which of them is most useful in valuing stocks. In economic analysis the financial measure of risk and return is the most commonly used method, and the method of risk and return assessment also depends on the measure of risk considered. The risk-assessment methods to be adopted in assessing the portfolios for this research include the Sharpe ratio, which uses the standard deviation as a measure of risk, and the Treynor ratio, which uses the beta of the security as a measure of risk. These two ratios use the two traditional measures of risk, beta and standard deviation, in evaluating performance. Another method of evaluating performance to be used in this research, though not as widely used as the former two, is the risk-adjusted return. Since the aim of this research is to examine the performance of value and growth shares, the use of risk-adjusted return will be appropriate, since it is useful in comparing portfolios at different levels of risk (Bacon, 2004). Risk and return will be evaluated using the FTSE 350 Index as the benchmark, and the returns of the shares in each portfolio will be value-weighted, with market capitalisation as the weight.
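For reference, the standard definitions of these two measures, with Rp the portfolio return, Rf the risk-free rate, σp the standard deviation of portfolio returns and βp the portfolio beta, are:

Sharpe ratio = (Rp - Rf) / σp
Treynor ratio = (Rp - Rf) / βp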

3.3.2 Statistical Analysis

The statistical analysis intends to answer the question of the significance of the variables in explaining the returns, in this case the significance of the P/E and PEG ratios in explaining the returns of the portfolios. The hypothesis to be tested is the statistical significance of the valuation model used in explaining the outperformance of one valuation indicator over the other.

3.4 Timeframe.

Given that the research will be carried out over a period of three months, the following timeframe has been set to achieve the objectives.

June 2009
Week 1: Make corrections on the submitted proposal based on feedback given by the supervisor
Week 2: Initiate collection of primary data and formation of portfolios
Week 3: Meet with supervisor to discuss the primary research plan and get the required advice
Week 4: Start data analysis

July 2009
Weeks 1-3: Expand on secondary research; review the introduction and literature review
Week 4: Meet with supervisor to review the data analysis done so far

August 2009
Week 1: Data interpretation
Week 2: Meet with supervisor to discuss results
Week 3: Bring findings together and prepare the conclusion and recommendations
Week 4: Put finishing touches to the project and prepare to submit

4.0 Results/Conclusion

The main aim of this proposal is to show how the examination of value and growth shares in the UK market using the PEG ratio can give further insight into the consistency of the outperformance of value stocks found in the available empirical evidence. Since the use of the PEG ratio to rank stocks for classifying investment style is still scarce in the literature, especially for the UK market, this research will be an attempt to explore that gap in knowledge, which is the aim of this paper.

References

Alan Gregory, Richard D.F. Harris and Maria Michou, 2001. "An Analysis of Contrarian Investment Strategies in the UK". Journal of Business Finance and Accounting. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=2&sid=f31a6990-1cab-4df (accessed 4th May 2009).

Bacon, Carl R., 2004. Practical Portfolio Performance Measurement and Attribution. [Online] Available from: http://www.netlibrary.com/Reader/ (accessed 7th May 2009).

Basu, S., 1977. "The Investment Performance of Common Stocks in Relation to their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis". Journal of Finance. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=6&sid=eea529b4-b6af (accessed 9th May 2009).

Blake, D., 2000. Financial Market Analysis. 2nd ed. New York: John Wiley & Sons.

Bradshaw, Mark T., 2004. "How Do Analysts Use Their Earnings Forecasts in Generating Stock Recommendations?". The Accounting Review. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=5&sid=c76dc2c (accessed 1st May 2009).

Browne, H.C., 2008. "What Has Worked In Investing: Studies of Investment Approaches and Characteristics Associated with Exceptional Returns". [Online] Available from: http://www.tweedy.com/resources/library_docs/papers/WhatHasWorkedInInvesting.pdf (accessed 6th May 2009).

Capaul, Carlo, Ian Rowley and William F. Sharpe, 1993. "International Value and Growth Stock Returns". Financial Analysts Journal. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=9&sid=dd4883d8-87e0 (accessed 8th May 2009).

DeBondt, Werner F.M., and Richard H. Thaler, 1987. "Further Evidence on Investor Over-reaction and Stock Market Seasonality". Journal of Finance. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=2&sid=a32c4cf5 (accessed 9th May 2009).

Donald, 2008. Behavioural Finance. In: Handbook of Finance, Vol. II. New York: John Wiley & Sons.

Estrada, I., 2005. "Adjusting P/E Ratios by Growth and Risk: The PERG Ratio". International Journal of Managerial Finance. [Online] Available from: http://campusmoodle.rgu.ac.uk/mod/resource/view.php?id=119391 (accessed 6th May 2009).

Fama, Eugene F., and Kenneth R. French, 1992. "The Cross-Section of Expected Stock Returns". Journal of Finance. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=6&sid=0c99df5a-205e-4df0-987 (accessed 7th May 2009).

Fama, Eugene F., and Kenneth R. French, 1998. "Value versus Growth: The International Evidence". Journal of Finance. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=6&sid=0c99df5a-205e-4df0-9877 (accessed 7th May 2009).

Keppler, A. Michael, 1991. "Further Evidence on the Predictability of International Equity Returns". Journal of Portfolio Management. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=6&sid=4934febe-586b-4967-9360-b16366b971f1 (accessed 9th May 2009).

Keppler, A. Michael, 1991. "The Importance of Dividend Yields in Country Selection". Journal of Portfolio Management. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=6&sid=4934febe-586b-4967-9360-b16366b971f1 (accessed 9th May 2009).

Lakonishok, Josef, Andrei Shleifer and Robert W. Vishny, 1994. "Contrarian Investment, Extrapolation, and Risk". Journal of Finance. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=6&sid=0c99df5a-205e-4df0-987 (accessed 6th May 2009).

Louis K.C. Chan and Josef Lakonishok, 2004. "Value and Growth Investing: Review and Update". Financial Analysts Journal. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=9&sid=dd4883d8-87e0 (accessed 9th May 2009).

Levis, Mario, 1989. "Stock Market Anomalies: A Re-assessment Based on the UK Evidence". Journal of Banking & Finance. [Online] Available from: http://www.sciencedirect.com/science?_ob=PublicationURL&_tockey (accessed 7th May 2009).

Michie, R., 1999. The London Stock Exchange: A History. 1st ed. New York: Oxford University Press Inc.

Easton, Peter D., 2003. "PE Ratios, PEG Ratios, and Estimating the Implied Expected Rate of Return on Equity Capital". Working paper. [Online] Available from: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=423601 (accessed 9th May 2009).

Peters, Donald J., 1991. "Valuing a Growth Stock". Journal of Portfolio Management. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=6&sid=4934febe-586b-4967-9360-b16366b971f1 (accessed 9th May 2009).

Punch, K., 2005. Introduction to Social Research: Quantitative and Qualitative Approaches. 2nd ed. London: Sage Publications Ltd.

Ran Barniv, Ole-Kristian Hope, Mark Myring and Wayne B. Thomas, 2009. "International Evidence on Analyst Stock Recommendations, Valuations, and Returns". Working paper. [Online] Available from: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1350616 (accessed 9th May 2009).

Xinzhong Xu, 2001. "Discussion of An Analysis of Contrarian Investment Strategies in the UK". Journal of Business Finance and Accounting. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=2&sid=f31a6990-1cab-4df (accessed 4th May 2009).

Bauman, W. Scott, C. Mitchell Conover and Robert E. Miller, 1998. "Growth Versus Value and Large-Cap Versus Small-Cap Stocks in International Markets". Financial Analysts Journal. [Online] Available from: http://web.ebscohost.com/ehost/results?vid=2&hid=9&sid=dd4883d8-87e0-4dac- (accessed 9th May 2009).

Case Study on Infosys Technologies Ltd

2.1 AIM

The aim of the research is to evaluate the activity of mergers and acquisitions, as well as the consolidation taking place in the IT sector of India, with special emphasis on multinational enterprises. The research will focus on drawing conclusions about the corporate- and business-level strategies followed within Infosys Technologies Ltd and relating the case to the Indian IT industry as a whole.

2.2 OBJECTIVES

To achieve the above mentioned aim, the following objectives are formulated:

1) To understand the current state and potential of the Indian IT industry by analyzing its strengths and weaknesses.

2) To study the trends in outward foreign direct investment (OFDI) related to the IT sector in India.

3) To evaluate the performance of Indian IT companies (MNCs) that have been involved in international M&As in the recent past.

4) To throw light on the corporate- and business-level strategies implemented at Infosys Technologies Ltd and hence relate the case to a wider scenario.

3. CONTEXT:

3.1 Rationale

The proposed research would be of interest to the emerging corporate companies in the Indian IT industry as it would provide valuable insights into the trends and the process of Mergers and Acquisitions happening at the international level.

3.2 Background

Infosys is an India-based IT company specializing in IT, BPO and engineering services that cut across many business verticals. The company has been strategically shifting its operations into technology consulting and is directly competing with giants like Accenture India (a subsidiary of Accenture plc) and IBM, which have long been established in this domain. With the entry of foreign companies into India, the market space for consulting and IT services companies is becoming crowded and companies are involved in a tough fight for market share. At the same time, the IT sector is growing rapidly, with annual revenues touching $70 billion. The IT business in India displayed resilience and tenacity during the global financial crisis and was not affected as much as in the US and Europe. Infosys is going global, acquiring companies abroad, and is also keen on strategic alliances and horizontal mergers (Infosys, 2009). With this strategy of expanding globally it has recently acquired McCamish Systems LLC in the USA and Mainstream Software Pvt Limited in Australia. It is also rumored to be looking to buy out a few other companies in various locations; also on the list is Axon Group PLC in the UK.

4. LITERATURE REVIEW

This part of the research proposal gives a brief description on the research being carried out on the subject of M&A in the Indian IT industry as well as the outward foreign direct investment by Multinational Companies in the IT sector.

4.1 Indian IT Industry and its footprint abroad

The IT industry has been a growth engine for the Indian economy over the last decade. In the last five years, IT exports have more than tripled, with enhanced service offerings, a diversified geographic base and a focus on new verticals (Ramachandran, 2006). The affinity of Indian IT firms for cross-border mergers and acquisitions is mainly due to the lack of opportunities in the domestic market. The inertia of the Indian private sector towards computerization has led the software companies to provide services to international clients. The trend of foreign direct investment, both inward and outward, began in 1991 after the liberalization of the Indian economy, which provided a real boost to the IT industry. The post-liberalization era proved to be very fruitful for the Indian economy and for the IT sector in particular: five years on, software exports had quintupled due to a global shortage of software professionals, with a striking increase in export revenue from $128 million in 1990-91 to $700 million in 1995-96.

Infosys Technologies Ltd. was the first Indian IT company to be listed on the NASDAQ stock exchange. This step by Infosys paved a golden path for other companies which entered foreign markets later on. The total size, in terms of revenues, of the Indian IT industry is estimated at $60bn as of 2009. A total of 2.23 million people are directly employed, whereas an additional workforce of 226,000 is employed indirectly in related services that depend on the IT sector. The IT industry in India accounted for 5.8% of GDP in 2009, as against 1.2% in 1998 (NASSCOM, 2009). The current mood of the IT sector is one of “cautious optimism”.

4.2 Mergers and Acquisitions by Indian Firms

Research in the segment of OFDI (Outward Foreign Direct Investment) suggests that mergers and acquisitions happening in the home country (within India) are reaping better results than those taking place outside the country. The reason for this phenomenon might be the advantage of consolidation efficiencies and investor confidence (Forbes, 2008). The above statement is also backed up by the fact that Indian companies making cross-border acquisitions in developed countries like the United States and the United Kingdom are underperforming in overall market returns. In spite of this poor performance, Indian firms have continued to venture abroad, with $23bn worth of acquisitions in the last decade. The trend of cross-border acquisitions is, interestingly, more dominant among the smaller firms which are not listed (Forbes). According to Anushri Bhandari of Watsonwyatt.com, acquirers in the high-tech sector are at a higher risk of failure when compared to other sectors. In the IT industry, which is part of the high-tech sector, excess returns after an acquisition are reduced by 33% on average one year after the deal. Due to heavy fragmentation in the high-tech sector, most acquirers are not able to achieve excess returns even a few years after the deal because they are seeking expertise in an unrelated segment.

Why do Indian companies venture abroad? This question is answered in the literature with ample empirical data. The literature suggests that cross-border acquisitions are due to several reasons, a few of which are:

Economic Factors: The growth in international M&A by Indian IT firms can be attributed to the overall strength and prosperity of the Indian economy. The increase in the value of the Rupee, backed by the performance of the stock markets, as well as high interest rates abroad, has boosted M&A activity for years.

Competitiveness: Many Indian firms are venturing abroad to explore new markets and acquire expertise in research and development. Another striking reason for cross-border acquisitions is to build a global brand image (Banerjee, 2005). This factor of competitiveness influences companies in the IT sector more than in any other sector, as the companies enter new markets in search of ‘domain expertise’ (Sengupta, 2006).

5. RESEARCH METHODOLOGY

The methodology followed for conducting the research acts as a foundation for the entire research process and will determine the means of reaching the conclusions. The methodology, as understood by the researcher, describes the way in which the research is carried out and deals with the techniques employed to collect data from primary and secondary sources. It also deals with how the data is processed and presented to obtain valuable conclusions and findings at the end of the research.

5.1 Research Paradigm

According to Dash (1993), academic research is about exploring and understanding social phenomena which are educational in nature; in this process, academic research deals with questions that can be investigated in a satisfactory manner and with the methods that enable such satisfactory investigation.

The research paradigm adopted in the dissertation will be a blend of both positivism and anti-positivism (phenomenology). In view of the fact that the proposed research needs quantitative methods of analysis which follow the principles of empiricism, such as surveys, the positivist paradigm is followed. The anti-positivist or phenomenological paradigm is used because a case study is being carried out, which involves observation and first-hand experience rather than just relying on secondary data.

5.2 Research Approach

The research design and approach are based on the Saunders onion model, which suggests a layer-wise approach (Saunders, 2000). This model advocates a top-down approach to research, starting from the outer layer of the onion, i.e. having a clear research philosophy (positivism, interpretivism or realism), then moving inside the onion to a reasoned research approach, which may be deductive or inductive, and likewise moving to the inner core of the onion for the collection of data through sampling, interviews, secondary data, questionnaires and so on.

Figure: Saunders Onion Model of Research

Source: Research Methods for Business Students, Saunders, M. (2000).

5.3 Sampling procedure

Well conducted probability samples provide the researcher with the ability to gather information from a relatively small number of members of a large population and accurately generalize the results to the entire population. In addition, probability samples enable the researcher to calculate statistics that indicate the precision of the data (Fairfax, 2003).

The sampling procedures for this study will be a mixture of both probabilistic and non-probabilistic sampling methods. Among the non-probabilistic methods, judgmental sampling and convenience sampling will be used, because the study demands an understanding of the trends in international takeovers by the Indian IT industry with an emphasis on the strategies followed by various corporate firms. Convenience sampling will be the most predominantly used sampling procedure in this research, as the researcher will choose the persons to interview based on his convenience. Among the probabilistic methods of sampling, the researcher will employ simple random sampling and cluster sampling in view of their relevance and aptness to the research.

5.4 Data collection Methods

As the study depends mostly on secondary sources of data such as research papers, the main resource needed is uninterrupted access to e-sources, which is available from the University digital library. When primary research is carried out as a part of the case study, the researcher can make arrangements to establish communication channels with the resource persons via telephone, email and direct meetings whenever necessary. Apart from this, there are no special resources required for the study.

The proposed study will take into account the secondary data available on the subject and will then rely on primary data by examining an organization closely as part of the case study. The research methodology comprises questionnaires and in-depth interviews, which provide the necessary qualitative data for carrying out the research. The type of methodology being employed is grounded theory, and a bottom-up approach will be followed in achieving the research objectives. The information obtained from the questionnaires and interviews is finally analyzed and appropriate conclusions are drawn. The findings are presented in the form of a report and are delivered as recommendations.

5.5 Data Analysis

To analyze the data collected through primary and secondary sources, the researcher will employ various mathematical and logical techniques for simplifying and synthesizing the data. The methods of data analysis in this research include qualitative as well as quantitative analysis. In the qualitative analysis, grounded theory is used in preference to other techniques because of its robust applicability and reliability. Analytic induction and logical analysis, which involve pictorial representation of data such as flow charts, matrices, graphs and charts, will also be appropriate data analysis techniques for the proposed research. Microsoft Excel will be used extensively to build the graphs, bar charts and pie charts accordingly.

5.6 Validity and Reliability

Research conducted using primary sources of data is prone to errors, especially sampling errors and reliability issues. At each stage of the research, while collecting data as well as analysing it, the researcher needs to take due care of these creeping errors and should arrive at sound, defensible conclusions.

To avoid reliability issues arising from participant bias, which is the most common form of sampling error, a simple random sampling technique will be used. In this technique, the researcher first prepares an exhaustive list of all members of the interest group, and the sample is drawn from this list, giving each member an equal chance of being selected for interview.
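As a minimal illustrative sketch of this technique (not part of the original proposal), the following Python snippet draws a simple random sample from a hypothetical sampling frame; the member list, sample size and seed are assumptions chosen only for demonstration.

import random

def draw_simple_random_sample(members, sample_size, seed=None):
    # Simple random sampling: every member of the frame has an
    # equal chance of being selected for interview.
    if seed is not None:
        random.seed(seed)  # fix the seed so the draw is reproducible
    if sample_size > len(members):
        raise ValueError("sample size cannot exceed the size of the frame")
    return random.sample(members, sample_size)

# Hypothetical exhaustive list of interest-group members
member_list = ["Respondent {0}".format(i) for i in range(1, 201)]
interviewees = draw_simple_random_sample(member_list, sample_size=20, seed=7)
print(interviewees)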

6. Ethical Considerations and Access Negotiation

The researcher intends to employ purely ethical means of data collection, with due permissions and front-door access to the data required from the organization being examined. As the rationale of the research shows how the research is useful for the organization, the researcher can negotiate fair means of access to the organization to carry out the case study.

7. Gantt Chart

The original proposal presented the schedule as a Gantt chart spanning September to December (weeks 1 to 12), mapping each activity to the weeks in which it would be carried out. The activities, in order, are:

1. Holiday
2. Read literature
3. Finalise objectives
4. Draft literature review
5. Read methodology literature
6. Devise research approach
7. Draft research strategy and method
8. Develop questionnaire
9. Pilot test and revise questionnaire
10. Administer questionnaire
11. Enter data into computer
12. Analyse data
13. Update literature read
14. Complete remaining chapters
15. Submit to tutor and await feedback
16. Print and bind
17. Submit


Information System for Heavy Vehicle Weighing


AN INTEGRATED INFORMATION SYSTEM FOR HEAVY VEHICLE WEIGHING AT TRAFFIC CONTROL CENTRES: A FREE STATE PROVINCE CASE STUDY

1. INTRODUCTION

1.1. Topic Discussion

The research study aims to develop a conceptualisation of an integrated information system. In this study, an integrated information system refers to a computerised system that shares data from a central database or between multiple databases. Traffic control centres refer to heavy vehicle weighbridges that use computerised systems to capture and calculate vehicle masses.

1.2. Background

The Republic of South Africa has nine provinces. The Free State is the central province, and most heavy vehicles from the other eight provinces frequently drive through it. Anecdotal evidence has shown that there are shortcomings in the monitoring, control, analysis and reporting of the results of activities relating to overload control operations at Traffic Control Centres in the province.

For any organisation to function properly in today's competitive and ever-changing markets, well-informed business decisions have to be made, and organisations are forced to find ways of operating effectively and in the most cost-effective manner. Traditional competitive methods are no longer dependable; organisations have to stay up to date with technology, and this applies to government projects and operations as well (Abeysekera 2005).

Organizations use different strategies and resources to achieve business goals, and knowledge is one of the critical resources in current business practices. Organizations cannot afford not to manage knowledge properly and therefore the use of Knowledge Management (KM) systems and technologies to support objectives of the organization is of paramount importance (Aggestam 2008).

Knowledge management is defined as the practice of selectively applying knowledge from previous experiences of decision making to current and future decision-making activities, with the purpose of improving the organization's effectiveness (Alavi, Leidner 2001). The use of knowledge management models and information systems has shown success as a tool for knowledge creation. The procedure for collecting, storing, retrieving, and transforming data into knowledge-based media can be designed and documented (Wade, Hulland 2004).

System integration is being used by other organisations as an option for solving these problems. When computers first became popular as a tool in the strategic planning process, system integration was confined to technical aspects such as connecting computer hardware components. More recently, information technology has grown to the point where knowledge creation is equally important, and integration is applied to software, data and communication as well (Abeysekera 2005).

This research will focus on using system integration as a way of managing data and knowledge, and will also include communication processes and data integration methods. Data integration is the process of combining data residing at different sources and providing the user with a cohesive view of the data (Lenzerini 2002). The researcher considers it worthwhile to investigate the current operations at weighbridges, with the aim of helping to address the problems experienced by traffic control centres and decision makers.
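As a minimal sketch of what data integration in this sense can look like (an illustration only, not part of the proposal), the Python snippet below maps records from two hypothetical, independently structured site databases onto a single common schema so that they can be viewed together; all field names and values are invented for the example.

from itertools import chain

# Hypothetical extracts from two independent weighbridge site databases,
# each using its own local field names for the same underlying facts.
site_a_records = [{"reg_no": "FS123", "gross_mass_kg": 56200, "site": "A"}]
site_b_records = [{"registration": "FS456", "mass": 48100, "site": "B"}]

def to_common_schema(record):
    # Map a site-specific record onto one cohesive, mediated view.
    return {
        "vehicle_registration": record.get("reg_no") or record.get("registration"),
        "gross_mass_kg": record.get("gross_mass_kg") or record.get("mass"),
        "weighbridge_site": record["site"],
    }

integrated_view = [to_common_schema(r) for r in chain(site_a_records, site_b_records)]
print(integrated_view)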

2. PROBLEM STATEMENT/ RESEARCH QUESTIONS

2.1. Problem Statement

Free State Province is one of the nine provinces in South Africa and has three traffic control centres that are used for heavy vehicle weighing (see Appendix A) as recorded on the “South African Long Term Overload control Statistics (1995 to 2009)” (Republic of South Africa. Department of Transport July, 2010).

The main problem is that there are shortcomings in the monitoring, control, analysis and reporting of the results of activities and operations in the overload control processes at Traffic Control Centres for heavy vehicle weighing. During preliminary research the researcher found that these problems are caused mainly by the traffic control centres operating independently of each other, which creates the further problems listed below:

  1. When vehicles are weighed, information such as company names, vehicle registration numbers and drivers' licence details should be collected and documented for record keeping and for further analysis of overloading trends in the province. Because of the independent operations there is no consistency in how this information is captured, and in some cases it is not captured at all. This usually creates friction between heavy vehicle drivers/owners, traffic officers and weighbridge operators, as it is time consuming for drivers to wait each time this information is captured.
  2. In the Free State Province there are three operational weighbridges that are using computerised systems for heavy vehicle weighing, but these weighbridges are operating independent of each other using site based data storage systems. This type of operation causes a problem as law enforcement officers at one weighbridge have no idea of what is going on at the other weighbridges and makes it difficult to implement an effective law enforcement strategy.
  3. Authorities develop policies and make strategic and operational decisions based on the collective operational reports; which are produced based on the data collected from the traffic control centres. As data is collected independently and not following a standard procedure, the produced reports do not give a true picture of the overloading trends and therefore the decisions that the authorities will make are usually based on incomplete information.

2.2. The Research Question

What is the state of the overload control processes at traffic control centres for heavy vehicle weighing in the Free State Province in terms of monitoring; control and analysis of the results of weighing activities?

2.2.1. Sub-Questions:

To address the three sub-problems, the following sub-questions must be answered clearly during the research process.

  1. What are the data collection processes and technologies currently being used at each Traffic Control Centre?
  2. What are the applicable integration methods that can be used to integrate the collected data from the three traffic control centres?
  3. How can these data be made available to authorities for reporting and decision making processes?

3. LITERATURE REVIEW

3.1. Why overload control?

Overloaded heavy vehicles are responsible for approximately 60% of the damage to the road network, compared with legally loaded heavy vehicles, which cause only about 40% of the damage (CSIR, Roads and Transport Technology 1997); heavy vehicles, overloaded or not, therefore have a significant impact on road damage. Overloaded heavy vehicles shorten the life span of the road structure, with added costs for maintenance and rehabilitation of the road pavement. The road network must be managed and protected while maintaining the economic base of the freight industry. While heavy vehicle operators profit from weak law enforcement, overloaded vehicles are damaging roads, the annual road budget is declining in real terms, and the condition of roads is deteriorating (Pillay, Bosman 2001).

Roads and streets are the most important transport communication medium in South Africa and are used by everyone on a daily basis. Roads also play an important role in promoting economic growth and the population's standard of living, and they make it easy for communities to access markets, places of work, health facilities and educational institutions (CSIR, Roads and Transport Technology 1997). The sixth annual State of Logistics report (2009) shows that most transportation companies in the country experience internal logistics costs due to inadequate road conditions. This affects the competitiveness of the country, because as logistics costs increase, the cost of products in the global market increases as well (CSIR, Imperial Logistics, Stellenbosch University 2009).

South Africa has 118 static weighbridges used for heavy vehicle weighing, of which only 78 are operational; 30 are not usable and 10 are usable but currently not operational, as recorded in the South African Long Term Overload Control Statistics (1995 to 2009). These include provincial weighbridges, municipal weighbridges at testing stations that are also used for overload control, weighbridges operated by toll road concessionaires in cooperation with provincial road authorities, and private weighbridges used for overload control (Republic of South Africa. Department of Transport July, 2010). The general heavy vehicle weighing process is shown in Appendix B.

As noted in the problem statement, the three operational weighbridges in the Free State Province that use computerised systems for heavy vehicle weighing operate independently of each other, with site-based data storage. As a result, law enforcement officers at one weighbridge have no view of what is happening at the other weighbridges, which makes it difficult to implement an effective law enforcement strategy.

3.2. Information systems

In a study done by the Council for Scientific and Industrial Research (CSIR) for the Republic of Senegal, computer systems were identified as central to all the weighbridges involved in heavy vehicle overload control, and were to be used at all weighbridges for data collection and information management. An integrated computer and management information system was implemented, and it was recommended that a computer system be connected to all weighing equipment installed at the weighbridge for automatic collection of the data produced by this equipment (Nordengen et al. 2006).

Most information systems are made up of components of three different types: application programs, information resources such as databases and knowledge bases, and user interfaces. These components are integrated in such a way as to accomplish a concrete (business) purpose (Guarino 1998). Information resources include human resources and historical knowledge, with the main purpose of supporting the creation, transfer and application of knowledge in organisations. Notably, systems designed to support knowledge in various organisations may not appear completely different from other forms of information systems, but their main focus is on allowing users to give meaning to information and to capture some of their knowledge in information and data (Alavi, Leidner 2001).

Knowledge is not a “thing” or a system and therefore cannot simply be stored or managed; it is an active, continuous process of relating (Snowden 2002). The purpose of this paper is not to develop a knowledge management system similar to those used by organisations focused on maximising profits through competitive intelligence for advantage in the marketplace, but to develop a concept of an information system that can be used to create, store, analyse and present information using accepted knowledge management models for knowledge creation and sharing.

The idea of knowledge management was advanced by Nonaka and Takeuchi in 1995, who argued that for organisations to succeed they have to focus on knowledge creation and distribution (Nonaka, Takeuchi 1995). Nonaka elaborated in 2000 that knowledge involves a relationship between tacit and explicit knowledge, which gave rise to a model of the dynamic process of organisational knowledge creation, maintenance and exploitation. The model supports four knowledge processes: 1 – Internalisation; 2 – Externalisation; 3 – Socialisation; and 4 – Combination (Nonaka, Toyama & Konno 2000).

Organisations use information systems for various reasons and purposes, but for knowledge management all systems are based on the General Knowledge Model as described by Brian Newman in his paper “A Framework for Characterizing Knowledge Management Methods, Practices, and Technologies”. The model organises knowledge flows into four primary activity areas: knowledge creation, retention, transfer and utilisation (Newman, Conrad 2000), as represented in Figure 1.

Knowledge Creation – This comprises activities associated with the entry of new knowledge into the system, and includes knowledge development, discovery, and capture.

Knowledge Retention – This includes all activities that preserve knowledge and allow it to remain in the system once introduced. It also includes those activities that maintain the viability of knowledge within the system.

Knowledge Transfer – This refers to activities associated with the flow of knowledge from one party to another. This includes communication, translation, conversion, filtering and rendering.

Knowledge Utilization – This includes the activities and events connected with the application of knowledge to business processes.

3.3. Qualitative research

Qualitative research methods involve the systematic collection, organisation and interpretation of textual material derived from interviews, observations and documents (Malterud 2001). Qualitative methods are being used increasingly in evaluation studies, including evaluations of computer systems and information technology, and qualitative data gathered primarily from observation, interviews and documents are analysed using a variety of systematic techniques.

4. SIGNIFICANCE OF THE STUDY

This research study aims to contribute to the research area of business informatics, with the main focus on information management and information systems conceptualisation. It fills a gap, in that information systems are not a primary concern for transport researchers: transport researchers focus most of their research on transport economics and engineering while neglecting the benefits that information systems can offer to the industry. By basing the study on knowledge management theories and models, this study will demonstrate the importance of using information systems to solve weighbridge operational problems and to support decision-making processes by the authorities.

Knowledge management theories and models provide a way to manage data collection and information sharing, which adds value to the organisation and the industry as a whole (Barclay, Murray 1997). Knowledge transfer in organisations is the process through which one unit (e.g., a department, division or group) is affected by the experience of another (Argote, Ingram 2000). In the case of this study, those units are the various transport departments, traffic authorities, agencies and stakeholders involved with overload control in the country.

The study will help provide guidance to the concerned parties on the applicable ways to design and implement an information system that would be suitable for knowledge management purposes, which can be used for decision support. This will benefit traffic authorities doing law enforcement for overload control; agencies managing the heavy vehicle weighbridges and decision making authorities such as the department of transport.

5. RESEARCH METHODOLOGY

The applicable research paradigm for this study is the interpretivist paradigm, using the qualitative research method. The interpretivist paradigm developed as a critique of positivism in the social sciences and shares the following beliefs about the nature of knowing and reality (Cohen D. 2006): a relativist ontology, which assumes that reality as we know it is constructed intersubjectively through meanings developed socially and experientially; and a subjectivist epistemology, which assumes that we cannot separate ourselves from what we know. The researcher and the researched object are linked, such that who we are and how we understand the world is a central part of how we understand ourselves, others and the world.

Interpretive studies assume that people create and associate their own subjective and intersubjective meanings as they interact with the world around them (Orlikowski, Baroudi 1991). This method of research adopts the position that our knowledge of reality is a social construction by human actors; the researcher uses preconceptions to guide the research process and, furthermore, interacts with the human subjects of the researched field (Walsham 1995).

For the purpose of this study, two groups will be involved, namely the management team from different authorities who are responsible for strategic planning when it comes to overload control processes and the general users who are operators at the weighbridges such as traffic officers and computer operators. The methodology will involve interviews, observation and analysis of the current operational procedures written in existing documents. This is in line with the interpretive approaches that rely heavily on naturalistic methods (Cohen D. 2006).

The following activities make up the major tasks of the study and the methodology that will be used to address them:

5.1. Theoretical Framework

The theoretical framework for the research study will be based on theories from information systems development and knowledge management. The main theory will be the activity theory, which will be supported by theories from the knowledge management models.

5.1.1. Activity Theory

Activity theory is a social-psychological theory with roots in the work done by the Russian psychologist Vygotsky at the beginning of the 20th century, and it has been adopted by information systems researchers (Crawford, Hasan 2006). In the same study, Crawford and Hasan show that information systems researchers have recognised the theory as providing a rich, holistic understanding of how people do things together, with the assistance of sophisticated tools, in complex dynamic environments.

The theory is a philosophical framework for studying human interactions as a developmental process. The fundamental unit of analysis is the activity, which has three characteristics: it is directed at a material object; it is mediated by artifacts (tools, language etc.); and it is situated within a culture (Figure 1). Computer artifacts mediate human activity within a practice (Bardram 1997).

Relevance of the activity theory to the research study

The object of the research activity

This study investigates the activities of groups of people who are involved in the weighbridge operations for overload control law enforcement. The groups are using a combination of face to face and computer based interactions in carrying out their duties.

The Subjects of the activity being studied

Members of the organisations and agencies that use the weighbridge system: weighbridge site managers, overload control decision makers, law enforcement officers, and the traffic regulatory body.

The tools of the research activity

The researcher will collect data directly from the subjects, by making an observation of the activities and interviewing the decision makers personally.

The primary, secondary and tertiary tools used

The tools of interest here are weighbridge field sheets, personal computers, server databases and other gadgets used to verify vehicle records.

The purpose of the activities and the motives of the subjects

The main aim of the activities will be to monitor the different operations, the possibility of integrating these operations, and the status of the tools currently used by the various authorities and weighbridges.

5.1.2. Knowledge management models

The objectives of the research study include improving the decision processes of the authorities involved, and knowledge models are the preferred option for knowledge creation and for decision-making purposes.

The KM model chosen as the basis for the study is the Nonaka and Takeuchi spiral model. The KM spiral is based on the theory that knowledge creation is a social process between individuals in which knowledge transformation is not a one-way process but is interactive and spiral, involving four processes: Socialization (from tacit to tacit knowledge); Externalization (from tacit to explicit knowledge); Combination (from explicit to explicit knowledge); and Internalization (from explicit to tacit knowledge) (Nicosord).

This model was chosen because, for the research results to make an impact, the following groups will have to be considered: (1) the heavy vehicle operators and drivers, who are the external group; (2) the relationship between the traffic control centre staff and the drivers, which forms part of the socialisation, together with the weighing system users and traffic officers at the TCC, who are the internal group; and (3) the combination, which will address issues related to the use of the data and how the data will be shared amongst all stakeholders.

5.1.3. Data Collection and Analysis Tools

The research study will use qualitative research methods and will therefore include unstructured interviews, observation and document analysis. The researcher is the main instrument for data collection and analysis and will make use of field notes and audio recordings, as used by researchers following a grounded theory method (Savenye, Robinson 1996). Field notes will be taken during observation and interviews, and the data will be grouped according to the KM cycles as indicated in Table 1.

Qualitative data can be analysed using interpretive methods; the researcher will use the interpretive approach to present a holistic view of the data. Field notes from observations and interviews will be interpreted and grouped according to the groupings in Table 1. Official documents collected from the authorities will be treated as raw data and then analysed in the same way as the field notes.

Table 1: Field notes grouping structure

Acquisition of data – This will list all the processes and procedures followed to acquire and capture data at the weighbridge operations level.

Refinement of data – This will list all the processes and procedures used to clean and refine the data.

Storage and Retrieval of data – This will group all systems used to store and retrieve data, whether manual or computerised, including the filing system or database types used (e.g. an SQL database).

Distribution of information – This will group items addressing how data is transformed into information and how information is delivered to the various stakeholders.

Presentation and Use of information – This will group issues addressing the rules and procedures for the overall data structures and the presentation formats, as well as how data will be shared.
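As a minimal illustrative sketch (my own, not part of the proposal), the Python snippet below shows how individual field notes might be tagged with the Table 1 groupings and then collated per group during analysis; the example notes and their tags are invented for demonstration.

from collections import defaultdict

# Hypothetical field notes, each tagged with one of the Table 1 groupings.
field_notes = [
    {"group": "Acquisition of data",
     "note": "Operator records axle masses on a paper field sheet"},
    {"group": "Storage and Retrieval of data",
     "note": "Field sheets filed on site; no shared database between sites"},
    {"group": "Distribution of information",
     "note": "Monthly summaries emailed to the provincial office"},
]

# Collate the notes per grouping for interpretation.
grouped_notes = defaultdict(list)
for entry in field_notes:
    grouped_notes[entry["group"]].append(entry["note"])

for group, notes in grouped_notes.items():
    print(group, "->", notes)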

6. DELINEATIONS AND LIMITATIONS

This research study is limited to the organisations and government departments involved in overload control projects in the Free State Province. It focuses on the static operational weighbridges listed in the South African Long Term Overload Control Statistics (1995 – 2009) and the provincial authorities on the same list (Republic of South Africa. Department of Transport July, 2010), and is further limited to the Bothaville, Senekal and Kroonstad weighbridges, which carry out overload control using computerised systems for heavy vehicle mass calculations.

7. ASSUMPTIONS

This research assumes that:

  • The concerned authorities have dedicated people who focus on overload control and who have adequate knowledge to clarify and answer questions about the weighbridge operations and the procedures used to make decisions on regulatory issues and the application of law enforcement.
  • The agencies operating the weighbridges have a dedicated person who is knowledgeable about the various traffic regulations for overload control.
  • The technical people at the weighbridges have a clear understanding of the data collection processes, from raw data to computerised capturing, and of how this data is transferred to the authorities.

8. ETHICAL CONSIDERATIONS

Ethics examines the rational justification of what is morally right or wrong through the systematic application of moral rules, standards or principles to research (Davison 2000). Ethics addresses issues of dignity, the right to privacy, informed consent, honesty with professional colleagues, and safety.

This research study is focused on studying the systems used for operations at traffic control centres and will involve human participants only insofar as they are part of those systems. Since human participants are not the subjects of the study, ethical considerations are not required as such. The researcher will nevertheless inform the participants about the nature of the research in advance, and the privacy of participants will be protected, as no names will be mentioned in the results of the study.

9. CHAPTER OVERVIEW

The research report will contain the chapters outlined in Table 2; a brief description of what each chapter contains is included.

Table 2: Chapter structure and brief overview

Chapter 1 – Introduction: Summary overview of all chapters.

Chapter 2 – Research Method: The methods used to carry out the study will be explained here.

Chapter 3 – Theoretical Framework: This contains an explanation of the theory required for the study.

Chapter 4 – Empirical Study: Results from the study will be presented and explained.

Chapter 5 – Analysis: The empirical study and theoretical framework will be connected here to explain the analysis of the study.

Chapter 6 – Conclusions: Presentation of conclusions.

Chapter 7 – Recommendations and Future Research: Research study recommendations and areas for future research will be presented here.

10. CONCLUSION

The reviewed literature has indicated the problems experienced at the traffic control centres with regard to the overload control processes, and the relevance of using information systems to address them. It has also shown the importance of integrated information systems in organisations and how they are used to simplify processes and to ensure the reliability and integrity of data collection and analysis. The Republic of South Africa can likewise benefit from information systems technologies in the fight against heavy vehicle overloading and in the improvement of weighbridge processes. The negative impact of overloaded heavy vehicles has been clearly demonstrated by previous research in the civil engineering discipline.


Psychological Impact of Interior Design on Hospital Patients


Chapter 1. Introduction

The broad foundation for the problem that leads to this study is that current state hospitals are not providing the best health care they could. One contributing factor is a shortage of beds for patients in need; another is the shortage of available doctors, as they spend much of their time and attention on patients who remain in the hospital only to await a later scheduled scan or test, such as an MRI or CT. The reduced number of open patient spaces is not the result of inadequate planning, but of too many patients staying in hospital unnecessarily: they do not need medical treatment but are waiting for tests with technological equipment that are only due in a week's time or more, having already been admitted for healthcare. They then stay in the hospital for that waiting period, mainly because of the expense of transport to their homes and back, and are cared for in hospital in the meantime. Another foundational problem is the waiting queue for state hospital patients; the current method applied to reduce it is not working well. A system should be introduced with more medical staff and nurses at the entrance, so that a staff member is assigned to the task of attending to and triaging incoming patients immediately according to the seriousness of their specific health situation. Patients with the most urgent healthcare needs could then be assisted first, which is in the best interest of both the patients and the hospital, since they share the same goal: looking after a person's well-being.

All of this contributed to a larger study into the aesthetics of general hospitals. The aim of the study is to throw light on the influence of aesthetics on the health and well-being of patients and professional personnel, and to examine how aesthetic considerations are dealt with. The aesthetic dimension should not be a neglected field in the guidelines for the daily management of hospitals, as it is of central importance to patients' well-being and psychological healing process.

There is not much documentation for this specific field of study, although it has been raised as a major issue numerous times. It has, however, been shown that the effect one's environment has on one's state of health and mental soundness is of utmost importance to developing a better psychological state, which in turn supports physical well-being. Contributing factors are the design elements of surrounding spaces (natural and artificial light, room layout, personal space, creating a sense of control, colour, the objects in the space, etc.), which strongly influence a person's mood and physical condition. They should not be included in a space as an afterthought, but as a core contribution in the process of designing a building for a specific purpose.

I have chosen hospitals and the health field as the main approach because I think this topic can have the most profound impact there, as it deals with human lives, which should never be taken lightly. A broad study will be done on the physical and psychological effects our environments have on us, but it will focus on healing spaces, as these need to be understood correctly and implemented as optimally as possible.

Chapter 2. Literature review

Most people with some kind of mental deterioration of organic or functional origin spend the majority of their time in nursing homes and hospitals; it is therefore highly important to be clear about what contribution the design of these environments makes to their lives. (Keen, J. 1989).

There is considerable evidence about how people feel when they are in hospitals or other healing spaces and about the impact of how the building looks, particularly where the objects in a space are chosen for a theoretically grounded purpose. (Keen, J. 1989).

Frameworks explaining why things are the way they are tend to be highly generalised and have to be questioned, especially given the rapid growth of the population. A practical concern that is also highlighted is the connotation people experience between privacy and home. (Keen, J. 1989). Keen also questions the links between policy research and design.

The aesthetic enhancement of an environment's atmosphere has a large effect on the person in the space. Since we are dealing with healing spaces, it is definitely a key factor: it reduces stress and anxiety, is inherently healing, increases patient satisfaction, and promotes health and healing, which is the main concern. (Frampton et al. 2004).

Opportunities exist to make meaningful contributions in the healing environment that are likely to make a significant impact on health outcomes of human beings. (Frampton et al. 2004).

The role of the environment in the healing process is a growing concern among health care providers, environmental psychologists, consultants, and architects. (Arneill, A. & Devlin, A. 2003).

There is considerable potential for thorough research to improve patient well-being and the current health care system through design, and we have the tools to do the kind of research that will lead to evidence-based design. (Arneill, A. & Devlin, A. 2003). Providers are also becoming more willing to support such efforts; there is therefore no reason not to investigate these matters, and they should be included in the planning and building process as a matter of course. (Arneill, A. & Devlin, A. 2003).

Issues to be raised concern patients' involvement with health care, focusing on the patient's need for control as a human being in situations and circumstances that are personally uncontrollable. (Arneill, A. & Devlin, A. 2003).

The impact of the ambient environment (e.g., sound, light, art) and specialized building types currently emerging for patients with specific needs will be discussed. (Arneill, A. & Devlin, A. 2003).

The aesthetics of a space and the healing environment created by physical elements such as accessories, colour, furniture and room design, lighting, smell, sound, texture and thermal conditions can have a significant, if tentatively established, impact when counselling patients. (Heesacker, M. & Pressly, P. 2001).

By being more aware of these design factors, counsellors can create better environments that promote healing, applying these elements to counselling settings. (Heesacker, M. & Pressly, P. 2001). This can also enhance the relationship between counsellor and patient, and it contributes to the observational, critical-incident and experimental research approaches regarding the physical context of counselling. (Heesacker, M. & Pressly, P. 2001).

“The underlying assumption is that the aesthetics of the hospital surroundings are often neglected”. (Caspari et al. 2006).

When considering the aesthetic influence that healing environments have on patients and medical personnel, and why they are implemented the way they are, one has to look at the strategic plans made when designing a health care centre. (Caspari et al. 2006).

The problem is that this major psychological influence is merely touched on in these plans; it is therefore not always well thought through and does not fulfil its purpose. This matters because many studies have shown the importance of this sphere to patients and medical staff. It is therefore confirmed that designers need to attend to patients' comfort, how patients evaluate hospital environment aesthetics, and what patients think about their influence on well-being, health and recovery. (Caspari et al. 2006). It is thus of central importance to have clear, well-thought-through and well-researched explanatory guidelines recorded in the strategic plans for the psychological impact of aesthetics in health care centres. (Caspari et al. 2006).

To enhance the well-being of patients and encourage social interaction, it is important to consider design elements carefully (configuration, furniture, privacy etc.); the number of people per room (residential crowding) and exterior noise can cause psychological distress. (Evans, G. 2003).

Patients who are not exposed to enough daylight may also develop depressive symptoms; in this way the physical environment indirectly influences mental health. Supportive relationships, personal control and the healing process are all affected by the built environment. (Evans, G. 2003).

“Studies have demonstrated that exposure to multiple adverse physical and social conditions can combine to yield more negative mental health outcomes compared to exposure to individual environmental stressors.” (Evans, G. 2003).

In a society that questions everything and has resources available for almost every topic imaginable, it is no surprise that we see hospitals changing in form and function. (Bell et al. 2004).

Research has gone further than simply pursuing a disease-free body; it also addresses the mind. These shifting goals are being encouraged through a diverse range of design features, encompassing modifications to the social, symbolic and physical spaces of hospitals. (Bell et al. 2004).

Many of the problems concerning the impact health-care facilities have on individuals and the community can be dealt with by implementing a patient-centred design process. (Ryan, K. & LaBat, K. 2009). This is broadly based on health care professionals (all staff at hospitals or any health care facility) and design professionals needing to understand each other's fields and worlds better, and above all the needs of the patients. (Ryan, K. & LaBat, K. 2009). These professionals all share the same end goal, which is to improve the health of the patient, and so they will make the most progress when working together in harmony, with clear and regular communication among all parties, when targeting complex problems and finding the best solutions for them. (Ryan, K. & LaBat, K. 2009).

Waiting spaces should also be researched thoroughly, as patients and their families and friends spend much of their time in them and form an opinion of the whole centre from them. (Hsieh, M., Lee, W. 2010).

A study has shown that most people prefer a shaded place to sit that is visible to friends, but a sense of security is their main psychological priority with regard to the waiting space. (Hsieh, M., Lee, W. 2010).

Fig.1. The Preference of Waiting Space (Gender). (Hsieh, M., Lee, W. 2010).

Fig.2. The Preference of Waiting Space (Age). (Hsieh, M., Lee, W. 2010).

Fig.3. The Preference of Waiting Space (Waiting Time). (Hsieh, M., Lee, W. 2010).

Fig.4. Psychological Effects of Six Open Spaces. (Hsieh, M., Lee, W. 2010).

Healthcare facilities are functionally built well (doors big enough for beds, spaces well thought through for equipment etc) but are psychologically lacking. (Ulrich, R. 2001).

From a marketing standpoint (presenting facilities to patients), any business model should aim to provide the best environment possible to patients, visitors and staff. (Ulrich, R. 2001).

Patients who are unhappy with their mental circumstances may suffer physical effects such as high blood pressure, anxiety, delirium and an increased intake of pain medication; research has linked poor design to outcomes that work against patients' well-being. (Ulrich, R. 2001).

Design should be ‘psychologically supportive’ and should facilitate and foster the healing process rather than address functional values alone. This should include a social support system, provide patients with a sense of control, and focus on positive rather than negative distractions; evidence has shown that natural elements make a major contribution to these factors. (Ulrich, R. 2001).

Although this sounds straightforward, every person's views and experiences differ; designers are therefore responsible for resolving these personal issues and anticipating the conflicts that lead to stressors. (Ulrich, R. 2001). Ulrich suggested that such studies should include the views of staff and visitors as well as the obvious preferences of patients.

The designer should understand personal needs and should also leave the patient with as much decision making as possible in their own environment, Ulrich suggested.

Scientific research on these healing environments should not be taken lightly, as it has the same goal as the designer, namely to achieve a successful solution and promote wellness; but it should be questioned, since every patient's situation is different from the next. (Ulrich, R. 2001).

Hospital noise pollution is an environmental and ambient stressor affecting patients' health. The influences working for and against its reduction include differing socio-political views and values, technological advances, and the motivation for controlling these noise hazards, as well as the barriers to doing so. (Topf, M. 2000).

Chapter 3. Methodology

My research approach was first to go to the library and take out books on the psychological effects of design, specifically in healing environments. The books found, however, were mainly about hospital design and architecture in general and how spaces relate to each other, which was very interesting but did not really focus on the problem stated.

I then went to a state hospital, Tygerberg Hospital, as I want to target state hospitals in particular: my main approach is to identify and state current issues and to show how they could be addressed by focusing on the psychological effect design elements have on the well-being of patients, thereby improving the healing process.

I obtained a few journal articles from my lecturer, which were very significant and helpful as they covered all the different fields of study I am approaching, as stated. I will go on to study other articles on the transport system between hospitals and patients' homes, as this is also very relevant: one of the reasons the state of state hospitals is faltering is a lack of beds, with beds taken up unnecessarily because of the expense of transport.

I will mainly focus on the impact of the psychological and aesthetic environment, but I will research the given statements thoroughly, targeting particular problems in state hospitals and also seeking solutions for hospital design as a whole by enhancing patient-centred design; all elements contributing to the design of a state hospital should therefore be understood thoroughly.

Chapter 4. Conclusion

Current issues regarding the psychological effect the environment has on a patient are important because human health matters greatly to any country, individual, family or society. If it is possible to enhance the healing process by considering the environment's interior design elements wisely and coherently, we should definitely do so to improve wellness and human life.

The implication of designing for psychological needs is that there should be more focus on the strategic plans when implementing a hospital design, with attention to the emotional and psychological effects on patients and not only on the functionality of the design.

As a theoretical framework, the relevant literature I have reviewed so far all contributes to the stated problem. The direction I will take first, from among the possible theoretical directions, is to study the relationship of patient-centred design to universal design, user-centred design and the newer human-centred design.

As a conceptual framework, I will also explore why interdisciplinary approaches are needed for patient-centred design and how interdisciplinary collaboration between all the professions involved and the patients works to address the challenges of patient-centred design.


Effect of Age on Occupational Stress


CHAPTER ONE

INTRODUCTION

Stress, in general, has received widespread attention in the professional literature and popular press. This attention is due to the fact that excess stress has been known to have detrimental effects on an individual’s psyche. Moreover, stress has been a common factor affecting all aspects of life including interpersonal relationships, work, school, and family (Greenglass, 2002). It also represents a major health concern implicated in most of the top 10 causes of death in the United States, with the first being heart disease (World Health Organization, 1999). Coping, on the other hand, has been extensively researched as well. More importantly, coping strategies play a vital role in an individual’s physical and psychological well-being when confronted with challenges since these help alleviate the harmful stress effects individuals can face. Also, coping can be viewed as a goal managing approach that utilizes social resources such as co-worker and family support (Greenglass, 2002).

One form of stress that is commonly examined and is prevalent in today’s fast-paced society is occupational stress, also referred to as job or work stress. Such stress results in a variety of negative health outcomes, impacting the individual, the family and the organization at which the individual is employed. It stands to reason that a solid understanding of the causes and results of occupational stress can lead to improved health among workers, both young and old. According to Shultz and Adams (2006), the literature on aging in the workplace has been receiving more attention as the number of retirees reaching the age of Social Security and Medicare is on the rise. With more Americans finding their retirement incomes insufficient to keep up their standard of well being or simply wanting to supplement what they receive, the demographic shift of older workers continuing in the workplace has instigated a whole new area of research on aging and stress in the workplace and the coping mechanisms of the elderly. To retain an older workforce is to understand potential differences in how they, versus the young workforce, deal with occupational stress (Barnes-Farrell, 2005). Hence, this literature review contributes to the understanding of occupational stress and coping mechanisms by first reviewing the concept of stress, its causes and consequences, and established models within the literature that attempt to explain the relationships among individuals, environmental characteristics, their coping strategies, and stress. Furthermore, this paper will review the literature concerning coping and the impact of age and gender upon both coping mechanisms and the experience of occupational stress. It is imperative to understand how older workers deal with this type of stress compared to their younger counterparts since the past literature has failed to address the importance of how older workers uniquely cope with occupational stress and the existence of an interaction effect between age and gender in coping related to occupational stress.

General Stress

Background

The concept of stress incorporates two distinct ideas, stressors, which refer to environmental characteristics that cause adverse reactions in an individual, and strain, the actual adverse reaction to the stressors. While stress itself is most often associated only with the situation and the subsequent response, this conceptualization does not give consideration to mediating factors or individual susceptibility to the phenomenon. Therefore, stress is more aptly explained as a result or product of the interaction between individuals and their environment. As such, most stressful situations are not, in and of themselves stressful, but rather are defined that way by the unique individual involved in the situation. That is, “what one person may deem stressful, another individual may view as comfortable” (Bamber, 2006, p.5).

Universally, stress may also be viewed in a more positive manner. For example, McGowan, Gardner, and Fletcher (2006) characterized stress as an interaction between demands made upon an individual and the ability to respond to those demands. The outcome of this interaction need not be negative since there exists a term for positive stress known as eustress in which the stressor elicits a positive response depending on the positive psychological state of the individual. For example, eustress may be characterized by positive affect, meaningfulness, and hope in response to a particular stressor. Moreover, this type of stress helps an individual cope with stress in a healthy manner.

Lazarus and Folkman (1980) developed the Cognitive Theory of Stress and Coping. This theory suggests that there is a transactional relationship between individuals and their environment which can be strenuous, can exceed their resources and can become threatening to their well-being. Judkins (2001) suggested that the emphasis of stress is on the individual's perception or cognitive appraisal of its importance, taking into account the situational demands and the individual's ability and resources for coping with that situation. Thompson (1992) used Lazarus and Folkman's theoretical framework to further emphasize that stress is not an object in the world but a reaction of the organism to events in the world. Thus, individuals experience stress based on how they react to life events such as stress at work.

Occupational Stress

As occupational stress has become a common fixture in the lives of millions of Americans, the consequences of this type of stress for both employees and organizations have received growing interest. Occupational stress is related to a range of factors both external and intrinsic to the workplace. Intrinsic factors include work overload or under-load (i.e., boredom), shift work, long hours, travel requirements, large work environments, and poor physical work settings. Other factors associated with it include role ambiguity, role conflict, mistrust or envy of coworkers, job insecurity, downsizing, poor communication among employees, low recognition by superiors, and low decision authority (Biron, Ivers, Brun, & Cooper, 2006; Danna & Griffin, 1999; Sexton, Teassley, Cox, & Carroll, 2007). External factors may be thought of as factors beyond the control of the individual, for example a company's decision to merge with another company for profit without taking into account the suggestions or concerns of its own employees. Occupational stress occurs when an individual experiences an overload of stressors stemming largely from the occupational environment. Bridger, Kilminster, and Slaven (2006) described a workplace stressor as an aspect of the work environment that poses demands the individual is not ready to meet, and which consequently causes strain. A strain, then, is caused by a stressor: for example, an employee attempting to meet an important deadline that he is unsure of meeting may feel over-worked and skill-deficient. Past literature has specifically focused on researching domains that include the physical characteristics of the occupational climate, such as heat, crowding, and noise, and the personal characteristics of workers within occupational environments, including their coping styles, beliefs about avoiding stress, and cognitive capacities (Byrne & Espnes, 2008).

Sparks and Cooper (1997) argue that occupational stress can result from a combination of work stressors. Work relationships and interactions between supervisors and co-workers can be one source of both strain and support. For example, if employees considered their supervisors to be hostile towards them, they experienced more pressure at work than those employees who had supportive bosses. Moreover, if employees had brief interactions with their supervisors without having a sufficient supervisor-employee time, employees might think that their supervisors are taking them for granted and unsupportive of their work. Cartwright and Cooper (1997) argued that another potential stressor can be a lack of job security. If an employee working in a company is uncertain of his or her job position, it may affect the overall work productivity and satisfaction of the employee. The reason is that this employee might constantly be under the stress of fear of job loss. Additionally, negative performance appraisals and persistent role ambiguity can be detrimental to employee well-being. Moreover, over-promotion such as frustration of having reached a career ceiling can make stress unbearable. In other words, an employee who has taken a leadership role or has been laden with many responsibilities by the company might feel over-worked and worn out.

Cooper and Lewis (1994) suggested that the work-family interface can also be a likely stressor for employees coping with occupational stress. Experiencing work overload, a lack of role clarity, and a hostile environment at work may affect the home environment, since the employee brings these problems home and can thus strain relationships with family members. Danna and Griffin (1999) agreed with Cooper and Lewis that factors related directly to the work environment are not the only potential causes of stress; the link between home and work can also present problems. Difficulties in managing the dual environments, particularly among two-income couples or individuals experiencing a personal crisis, can contribute to occupational stress.

Other research suggests that individuals with certain personality traits are more prone to occupational stress. For example, the “Type D” personality is linked to introversion and neuroticism. Oginska-Bulik (2006) reported that individuals with this personality type were more likely to perceive their work environments as stressful, due to lack of rewards, control, and responsibility, and would experience greater frequency of burnout in the form of emotional exhaustion, and demonstrate mental health disorders, including anxiety, insomnia, and depressive symptoms. Other researchers have stated that individuals with high positive affect and low negative affect demonstrate lower levels of blood pressure in response to stress than do individuals with both a high positive and negative affect (Norlander, Bood, & Archer, 2002).

The consequences of occupational stress can range in severity from mild to severe and affect both professional and personal lives. In one study of university staff members, participants identified professional aspects negatively impacted by stress such as job performance, interpersonal work relations, commitment to the organization, and extra-role performance, the latter of which refers to participation in extra tasks in the workplace or willingness to work extra hours. As previously mentioned, occupational stress can also spill over into one's personal life. Negative consequences within this domain include physical health problems, such as weight loss, fatigue, and back pain; psychological health problems such as burnout, anger, irritability, frustration, and feeling overwhelmed; as well as strained family and personal relations (Gillespie, Walsh, Winefield, Dua, & Stough, 2001). Several models of occupational stress have been proposed and have influenced contemporary organizational stress research; they are discussed in the following sections.

Theoretical Models of Occupational Stress

The Demand-Control Model of Occupational Stress

Developed by Karasek (1979), the Job Demand Control model explains the relationships among job demand, job control, and psychological strain in the workplace. Job demands are described as the amount of workload experienced by a worker, while job control refers to a worker's sense of autonomy in the workplace and the ability to control the response to job duties and how to complete them (Karasek & Theorell, 1990). An additional component, support, was added to the model in the early 1990s by other researchers; this component consists of the instrumental and emotional assistance, generally provided by immediate supervisors, to the worker. The model suggests that psychological strain results from a combination of factors: strain from a job environment is influenced by job demands and by the amount of autonomy workers perceive they have in facing these demands (Tansey, Mizelle, Ferrin, Tachopp, & Frain, 2004). These facets of the work situation create conflicts and demands that place workers in a position dominated by stress. In other words, the interaction of high work demand and low job control can trigger the onset of occupational stress. The main theme of the Job Demand Control model is that job control can protect against the detrimental effects of high work demands on psychological strain.

The Job Demand Control model consists of four dimensions, each incorporating various levels of job demand and control. The first of these dimensions, termed “High Strain Jobs”, suggests that the adverse effects of psychological strain, including anxiety, depression, fatigue, and physical illness, occur when job control is low and job demand is high. “In situations with high levels of stress or strain, the resulting arousal becomes damaging when the worker has little to or no control over his environment and the constraints that restrict how he can respond to the strain” (Karasek & Theorell, 1990, p. 31). The second dimension of the model, known as “Active Jobs”, is characterized by high levels of both psychological demand and control. In this particular situation, workers have the liberty to use their talents and skills to mitigate negative psychological stressors. “The energy from these stressors is then translated into action through active problem solving, which results in little psychological disturbance and average amounts of psychological strain” (p. 35). For example, in the job of a heart surgeon, psychological pressures such as operating on the heart and pressure to perform the operation on time are common; however, the surgeon retains some decision latitude in making decisions to save the life of the patient.

Karasek and Theorell (1990) described “Low Strain Jobs” as the third type of situation, defined by a small number of psychological demands and high levels of control. “Such jobs are associated with relaxation and leisure and low levels of psychological strain and physical illness. There are a few challenges in the workplace, and the worker possesses the ability to respond to any challenges that may appear” (p.36). An example of a low strain job may be that of monitor technicians, who monitor patient heartbeats and only report to the nurses if they see a spike in the patient’s rhythm. Otherwise, the job itself is comfortable because the technician simply sits in front of the monitor until an abnormal heart rhythm is discovered.

The final component of the Job Demand Control model is “Passive Jobs”, distinguished by both low levels of demand and control. In this type of situation, the authors contended that the “worker’s skills and abilities eventually wither, resulting in negative learning, loss of skills, and low levels of leisure and political activity outside of the work environment” (p. 37). Motivation and productivity are threatened when one is unable to fully satisfy one’s desire to implement one’s own ideas for improving the work environment or when a job is less challenging. “Jobs with low levels of both demand and control are also associated with average levels of psychological strain and illness” (p. 38). An example of a passive job might be janitorial work. In this type of job, an individual is not challenged enough to do something about the work because the work requires minimal special knowledge or skills, with little discretion in how to complete the work.

Mixed support for the Job Demand Control model exists in the literature surrounding occupational stress. Dollard, Winefield, and De Jong (2000) utilized the model to investigate differences in self-reported levels of job strain and productivity among different occupational groups, contending that occupational stress was primarily due to environmental factors rather than personal characteristics. The authors collected data on negative affectivity, work environment, emotional strain, and productivity. Findings indicated that a negative work environment significantly correlated with job strain. The level of job demand correlated positively with emotional exhaustion, depersonalization, and personal accomplishment, and negatively with job satisfaction. Job control, however, positively correlated with the latter two factors, while social support correlated negatively with emotional exhaustion and depersonalization.

Rusli, Edimansya, and Naing (2008) also utilized the Job Demand Control model to investigate the relationship between job demand, job control, social support, stress, anxiety, depression, and quality of life. They mentioned that the quality of life was predicted by increased social support and less social support led to increased health risks. Other results demonstrated a relationship between social support and job control and demand. Results indicated that job demand was reciprocally related to environmental work conditions and job control was positively correlated with social relationships in the workplace. The researchers concluded that stress, anxiety, and depression mediated the relationship between job demand and quality of life. An additional result from this study, which adds an interesting perspective to the Job Demand Control model, was that job control, stress, anxiety and depression increased with increasing age of the worker.

Another study conducted by Tarris and Feij (2004) addressed occupational stress by presenting findings that did not necessarily support Karasek and Theorell’s model. In this study, the researchers investigated how job demands, control, and strain impact the working aspirations of young workers with respect to the motivation to learn from more experienced colleagues and supervisors. The data were collected from younger workers over a period of two years. Cross-sectional results supported each of the four tenets of the Job Demand Control model, assuming that reduced job strain translated into increases in motivation to learn; however, some of these results did not hold true over time. For example, the authors demonstrated that increased job demand and control led to increased learning in the short term, but no increases in learning over the long term. Within these conditions over time, the level of strain decreased, likely due to the opportunity to utilize new strategies in dealing with strain. These results, as with the study conducted by Rusli et al. (2008), suggest that changes may occur over time which cannot be explained in full by the Job Demand Control model.

While the previous two studies involved younger workers with a mean age of 26 who were followed for a length of time, Totterdell, Wood, and Wall (2006) followed a group of workers for six months whose mean age was 48 years. The purpose of their study was to investigate how the Job Demand Control model applied to changes within the individual with respect to work characteristics and strain over time. The researchers collected data concerning optimism, emotional stability, problem-solving demands, time and method control, emotional support, and job-related stress. Results suggested that while demands, control, and support all affected job strain, they did so in an independent manner rather than interactively, which is contrary to the model. However, when considering levels of personal optimism, an interaction between demands and control was observed. For example, pessimists experienced greater levels of strain during periods of high demand and low control than did optimists. This study suggests that the components of the Job Demand Control model were affected by extraneous factors, such as individual emotional characteristics, although it provided no clue as to whether or not younger workers would yield similar results. In addition, studies of the Job Demand Control model have looked more at the psychological work demands of employees in general without paying close attention to the types of work demands that are stressful to workers from various groups (e.g., older versus younger workers). Yet, a recent study by Shultz, Wang, Crimmins, and Fisher (2009) did find some support for interactive effects of demands and controls for older workers, but not for younger workers. Specifically, they found that for the problem-solving demand, only one job control mechanism, having plenty of time to complete a work goal, buffered a stressful response for younger workers, while all job control mechanisms demonstrated buffering effects against job stress related to different job demand types for older workers.

The Effort/Reward Imbalance Model of Occupational Stress

A second model of occupational stress is the Effort/Reward Imbalance model or ERI, which adds a more subjective dimension to the Job Demand Control model. This model asserts that occupational status and successful role performance provide the means to increase self-esteem. However, the psychological benefits associated with work depend on both the individual’s efforts and the rewards obtained in response to those efforts, such as money or career opportunities. An individual who puts forth great effort, whether due to extrinsic motivation such as job obligations and demands, intrinsic motivation such as over-commitment to do the best work possible on the job, or a combination of both, but receives few rewards, experiences emotional stress and negative health consequences (Calnan, Wainwright, & Almond, 2000). Over-commitment, a third dimension of the model, may be a risk factor that impacts the balance between efforts and rewards (Niedhammer, Chastang, David, Barouhiel, & Barrandon, 2006).

Although this model can serve alone as a useful framework for understanding the impact of psychosocial factors on mental and physical health outcomes, it is further strengthened when considered in conjunction with the Job Demand Control model. For example, Niedhammer et al. (2006) investigated the health outcomes of workers in a company that distributed publications. In light of the Job Demand Control model, results indicated that, among male workers, job strain served as a risk factor for depressive symptoms, likely due to low levels of control and decision-making authority among such workers. In addition, women who experienced low levels of social support, an additional component of the Job Demand Control model, were at a greater risk for depressive symptoms. When viewed in light of the ERI model, the data indicated that, among male workers, this imbalance was associated with depressive symptoms and psychiatric disorders, possibly due to low rewards and job instability. Taken together, the two models provided a more well-rounded picture of the association between work-related factors, including strain, social support, and an imbalance between effort and reward, and the occurrence of depressive symptoms, which is a negative health outcome. Moreover, Siegrist, Starke, Chandola, Godin, Marmot, Niedhammer, and Peter (2004) agree with Niedhammer et al. about ERI, suggesting that the consequences of occupational stress are related to the balance between the amount of effort an employee puts into the job and the level of rewards, such as money, self-esteem, and job security, that can be gained from the effort put forth. The model further argues that those who are excessively motivated to be committed to their jobs may expose themselves to high work demands, or they might exaggerate their efforts beyond what is required for a particular job. For example, employees might flatter their supervisors to appear worthy in order to receive a type of monetary reward.

Depressive symptoms are but one of many negative health outcomes that could occur when perceived effort does not correspond with perceived rewards (Martin-Fernandez, Gomez-Gascon, Beamud-Lagos, Cortes-Rubio, & Alberquilla-Menendez-Asenjo, 2007). Preckel, Meinel, Kudielka, Huag, and Fischer (2007) reported on the effects of ERI upon the health outcomes of skilled workers within an aircraft manufacturing plant. Results indicated that over-commitment, a third dimension to this model, increased the risk of poor health outcomes, including self-reported health-related quality of life factors such as physical functioning; freedom from pain; vitality; vital exhaustion, characterized by loss of energy, trouble sleeping, irritability, and apathy; depressed mood; and negative affectivity. Another research study suggested that in a nursing profession, burnout and the desire to leave that profession, positively correlated with imbalances between efforts and rewards (Hasselhorn, Tackenberg, & Peter, 2004). However, the notion of “rewards” is subjective in nature, with some individuals placing higher value on certain rewards that may be deemed unimportant to others.

Voltmer, Kieschke, Schwappach, Wirsching, and Spahn (2008) attempted to further clarify the relationship between efforts/rewards and health outcomes by categorizing individuals according to correlated psychosocial factors and outcomes. In their study of medical students and physicians, the authors gathered data concerning professional commitment, resistance to stress, and emotional well-being. Based upon the specific health risks that correlated with each of these work-related behaviors, researchers identified four categories of individuals. Type “G” or the Healthy Ambitious Type individuals are ambitious at work but remain capable of maintaining a healthy emotional distance from the environment. Such behaviors correlated with resistance to stress and positive emotions. The second type of individual, Type “S” or the Unambitious Type, demonstrated lower commitment to work and a higher sense of detachment from the work environment. However, individuals in this group also scored well on measures of inner balance, satisfaction with life, and social support, indicating an overall sense of commitment with their personal lives. Like Type G individuals, members of this group did not experience any significant negative health outcomes; however, the lack of motivation was identified as one negative outcome.

The remaining two groups of individuals demonstrated negative health outcomes related to behaviors at work. “Type A” individuals, described as excessively ambitious, were characterized by excessive commitment to their work and difficulty maintaining an emotional distance from that environment. Health outcomes for these individuals included higher risk for coronary artery disease and myocardial infarction. “Type B” individuals, defined as “resigned”, demonstrated low scores for professional commitment, emotional distancing, and coping skills. Outcomes for these individuals included greater risk for mental instability, dissatisfaction with work and life, and limited social support, all of which are related to job burnout. This study clearly illustrates the main premise of the Effort/Reward Imbalance model in that psychosocial factors related to the work environment serve as risk factors for physical and mental health outcomes.

Person-Environment Fit

The Person-Environment Fit Model or P-E Fit explains that positive outcomes occur when individuals are closely matched to their work environment with respect to career-relevant personality type (Carless, 2005). Since individuals are often unique with regard to personal qualities, abilities, coping skills, and needs, different individuals may perceive the same job in different ways. What one person views as demanding and stressful, another employee may regard as challenging and exciting. Thus, based upon this theory, it is important to closely match an employee’s unique characteristics with specific qualities of jobs. Occupational stress is lessened when an appropriate match exists between the work environment and the individual; however, when a poor match exists, occupational stress may be quite high (Bamber, 2006).

According to the literature, several different types of fit occur within the realm of P-E Fit: these include Person-Organization Fit, Person-Job Fit, and Person-Innovation Fit. Carless (2005) described Person-Job fit as the match between an individual’s knowledge, skills, and abilities and the demands of the job, and between personal needs and what the job provides. When these two dimensions closely match, positive outcomes occur, such as a low attrition rate, high work performance, low turnover, and high job satisfaction. Person-Organization fit refers to the similarity that exists between the individual’s and the organization’s wants, needs, and characteristics. Individuals who perceive that an organization closely mirrors their own values, personality, attitudes, and goals are more likely to seek out and accept employment there.

Person-Innovation fit, a more recent development based upon the Person-Environment fit model, explains how people respond to innovations and predicts the outcomes of innovation implementation on an individual level. Values and abilities are two distinct attributes associated with the concept of innovation. The values attribute refers to the perceived values and goals underlying the innovation, while the abilities attribute refers to the skills, knowledge, and expertise needed for successful implementation of innovation. Past research has shown that different types of person-innovation fit predict different types of individual outcomes. To be specific, job satisfaction, well-being, and low stress are closely correlated with value-fit. While the value-fit correlates with affective outcomes, abilities-fit correlates with behavioral outcomes such as the use of technology or innovation and innovation implementation efforts (Choi & Price, 2005).

In addition to the characteristics associated with these three types of fit, including knowledge, skills, abilities, wants, needs, and values, another variation on the Person-Environment fit focuses upon an individual’s interests. The Interest-Vocation fit suggests that a person’s interests play a role in job satisfaction equal to the role played by skills and abilities. Furthermore, these factors are closely related, as research indicates that among some individuals, Interest-Vocation fit positively correlates with cognitive ability. More specifically, among individuals whose interests lie mostly in the artistic domains, high cognitive ability positively correlates with successful Interest-Vocation fit. Individuals with high cognitive ability whose interests are characterized as conventional or realistic were less likely to participate in vocations that matched their interests than their lower-cognitive-ability counterparts (Reeve & Heggestad, 2004). In spite of the support found in the literature for the applicability of the Person-Environment fit model in predicting factors such as work stress, criticism does exist. Bright and Pryor (2005), for example, discussed a number of these criticisms found in the literature. According to these authors, one problem with the model is that the interaction between the person and the environment is characterized in terms of traits. These traits, along with the concepts of “persons” and “environment”, represent static ideas that do not reflect the changing nature of today’s work environment. Other problems with this model include inadequate conceptualization and measurement within the literature with regard to the terms “person” and “environment” and the failure to incorporate the complexities and uncertainties associated with a changing job environment into the model.

Coping

Coping with stress has become a crucial area for research in reducing workers’ perceived level of stress. The focus on coping and ways in which it can reduce the levels of stress and promote a quality of life that is healthy has received abundant attention. According to Folkman and Laza


Research Proposal: X-ray Images Enhancement


INTRODUCTION

1.1 Digital image

A digital image is essentially a two-dimensional array of light-intensity levels, which can be denoted by f(x,y), where the value or amplitude of f at spatial coordinates (x,y) gives the intensity of the image at that point. The intensity is a measure of the relative “brightness” of each point. For a monochrome (single-color) digital image, the brightness level is represented by a series of discrete intensity shades from darkest to brightest. These discrete intensity shades are usually referred to as the “gray levels”, with black representing the darkest level and white the brightest level. These levels are encoded in terms of binary bits in the digital domain, and the most commonly used encoding scheme is the 8-bit display with 256 levels of brightness or intensity, from level 0 (black) to 255 (white). The digital image can therefore be conveniently represented and manipulated as an N (number of rows) x M (number of columns) matrix, with each element containing a value between 0 and 255 (for an 8-bit monochrome image), i.e.

         | f(0,0)     f(0,1)     ...  f(0,M-1)   |
f(x,y) = | f(1,0)     f(1,1)     ...  f(1,M-1)   |  ,  where 0 ≤ f(x,y) ≤ 255.
         | ...        ...        ...  ...        |
         | f(N-1,0)   f(N-1,1)   ...  f(N-1,M-1) |

Different colors are created by mixing different proportions of the 3 primary colors: red, green and blue, i.e. RGB for short. Hence, a color image is represented by an N x M x 3 three-dimensional matrix, with each layer representing the gray-level distribution of one primary color in the image.
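As a concrete illustration of this matrix view, the following minimal Python/NumPy sketch (the array sizes and values are made up for illustration) builds an 8-bit grayscale image as an N x M matrix and a color image as an N x M x 3 array:

import numpy as np

# An 8-bit monochrome image: an N x M matrix of gray levels in [0, 255].
N, M = 4, 5                                  # illustrative image size
gray = np.zeros((N, M), dtype=np.uint8)      # level 0 = black
gray[1, 2] = 255                             # level 255 = white at row 1, column 2
print(gray.shape, gray.min(), gray.max())    # (4, 5) 0 255

# A color image: an N x M x 3 array, one layer per primary (R, G, B).
color = np.zeros((N, M, 3), dtype=np.uint8)
color[..., 0] = 255                          # a pure red image: R layer at 255, G and B at 0
print(color.shape)                           # (4, 5, 3)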

Each point in the image denoted by the (x,y) coordinates is referred to as a pixel. The pixel is the smallest cell of information in the image. It contains a value of the intensity level corresponding to the detected irradiance. Therefore, the pixel size defines the resolution and acuity of the image seen. Each individual detector in the sensor array and each dot on the LCD (liquid crystal display) screen contributes to generate one pixel of the image. There is actually a physical separation distance between pixels due to finite manufacturing tolerance. However, these separations are not detectable, as the human eye is unable to resolve such small details at normal viewing distance (refer to Rayleigh’s criterion for resolution of diffraction-limited images [1]).

For simplicity, digital images are represented by an array of square pixels. The relation between pixels constitutes the information contained in an image. A pixel at coordinates (x,y) has eight immediate neighbors (four horizontal and vertical neighbors at unit distance and four diagonal neighbors), as shown in Figure 1:

(x-1, y-1)   (x-1, y)   (x-1, y+1)
(x, y-1)     (x, y)     (x, y+1)
(x+1, y-1)   (x+1, y)   (x+1, y+1)

Figure 1: Neighbors of a pixel. Note the direction of the x and y coordinates used.

Pixels can be connected to form boundaries of objects or components of regions in an image when the gray levels of adjacent pixels satisfy a specified criterion of similarity (equal or within a small difference). The difference in the gray levels of two adjacent pixels gives the contrast needed to differentiate between regions or objects. This difference has to be of a certain magnitude in order for the human eye to identify it as a boundary.
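As a small illustration of the neighborhood in Figure 1, the hypothetical helper below lists the eight neighbors of a pixel that fall inside the image and applies a simple gray-level similarity criterion (the threshold value and test image are assumptions for illustration only):

import numpy as np

def eight_neighbors(x, y, shape):
    # Coordinates of the 8-connected neighbors of (x, y) that lie inside the image.
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]
    return [(x + dx, y + dy) for dx, dy in offsets
            if 0 <= x + dx < shape[0] and 0 <= y + dy < shape[1]]

img = np.array([[10, 12, 200],
                [11, 13, 210],
                [ 9, 14, 205]], dtype=np.uint8)

# Neighbors are "similar" to (1, 1) if their gray levels differ by at most 5 (illustrative threshold).
similar = [(nx, ny) for nx, ny in eight_neighbors(1, 1, img.shape)
           if abs(int(img[nx, ny]) - int(img[1, 1])) <= 5]
print(similar)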

1.2 Image processing

Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it [2].

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.

Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.

Digital image processing consists of several steps. The first step is image acquisition, that is, acquiring a digital image. Once a digital image has been obtained, the next step is preprocessing. The key function of the preprocessing stage is to improve the image in ways that increase the chances of success for the other processes, produce better image quality, and reduce noise. The next stage deals with segmentation: image segmentation partitions the input image into its constituent parts or objects.

The next step is representation and description. Representation is the transformation of raw data into a descriptive form suitable for computer processing. Description deals with extracting features from those results; descriptions are necessarily task specific. The last step is recognition. Recognition is the process that assigns a label to an object based on the information about the object. Interpretation assigns meaning to recognized objects.

1.3 Image preprocessing:

Image pre-processing is the term for operations on images at the lowest level of abstraction. These operations do not increase image information content; in fact, they decrease it if entropy is taken as the information measure [3], [4]. The aim of pre-processing is to improve the image data by suppressing undesired distortions or enhancing image features relevant to further processing and analysis tasks. Image pre-processing exploits the redundancy in images: neighboring pixels corresponding to the same real object have the same or similar brightness values. If a distorted pixel can be picked out from the image, it can be restored as an average of the neighboring pixel values. Image pre-processing methods can be classified into categories according to the size of the pixel neighborhood used for the calculation of a new pixel brightness.

Image enhancement is necessary to improve the visual appearance of an image or to provide a better transform representation for subsequent automated image processing such as analysis, detection, segmentation, and recognition [5], [6]. To discern concealed but important information in images, it is necessary to use various image enhancement methods such as enhancing edges, emphasizing differences, or reducing noise.

In this thesis, one of these enhancement methods will be applied to x-ray images to increase both the accuracy and the interpretability of the data.

Digital images are now ubiquitous. Digital cameras, the main source of digital images, are widely available at low cost. The image taken from a digital camera is sometimes of poor quality and requires some enhancement. Many techniques exist that can enhance a digital image without spoiling it.

The enhancement methods can broadly be divided in to the following two categories:

1. Spatial Domain Methods

2. Frequency Domain Methods

In spatial domain techniques, we deal directly with the image pixels; the pixel values are manipulated to achieve the desired enhancement. In frequency domain methods, the image is first transferred into the frequency domain: the Fourier transform of the image is computed first, all the enhancement operations are performed on the Fourier transform of the image, and then the inverse Fourier transform is applied to obtain the resultant image. These enhancement operations are performed in order to modify the image brightness, contrast, or the distribution of the grey levels. As a consequence, the pixel values (intensities) of the output image are modified according to the transformation function applied to the input values.
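As a minimal sketch of the frequency-domain route (the filter shape, cutoff radius, and synthetic test image are illustrative assumptions, not a method proposed here), the image is transformed with the Fourier transform, a high-frequency-emphasis mask is applied to the spectrum, and the inverse transform returns the enhanced image:

import numpy as np

def highpass_sharpen(image, cutoff=5, boost=1.0):
    # Toy frequency-domain enhancement: amplify high frequencies, then invert the FFT.
    F = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    # High-frequency emphasis: pass low frequencies unchanged and boost the high ones.
    mask = 1.0 + boost * (1.0 - np.exp(-(dist ** 2) / (2 * cutoff ** 2)))
    out = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    return np.clip(out, 0, 255).astype(np.uint8)

# Illustrative input: a synthetic 64 x 64 gradient image.
test = np.tile(np.linspace(0, 255, 64), (64, 1)).astype(np.uint8)
print(highpass_sharpen(test).shape)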

Image enhancement is applied in many fields where images need to be understood and analyzed, for example, satellite image analysis and medical image analysis.

BACKGROUND

The aim of image enhancement is to improve the interpretability of information in images for human viewers, or to provide better input for other automated image processing techniques. IE has contributed to research advancement in various fields. Some of the areas in which IE has wide application are mentioned below.

1. Medical imaging [7], [8], [9], uses IE techniques for reducing noise and sharpening details to improve the visual representation of the image. Since minute details play a critical role in diagnosis and treatment of disease, it is essential to highlight important features while displaying medical images. This makes IE a necessary tool for viewing anatomic areas in MRI, ultrasound and x-rays to name a few.

2. In forensics [10], [11], IE is used for identification, evidence gathering and surveillance. Images obtained from fingerprint detection, security videos analysis and crime scene investigations are enhanced to help in identification of culprits and protection of victims.

3. In atmospheric sciences [12], [13], IE is used to reduce the effects of haze, fog, mist and turbulent weather for meteorological observations. It helps in detecting shape and structure of remote objects in environment sensing [14].

4. Astrophotography faces difficulties due to light and noise pollution that can be minimized by IE [15]. For real-time sharpening and contrast enhancement, several cameras have in-built IE functions. Moreover, numerous software packages [16], [17] allow editing such images to provide better and more vivid results.

5. IE techniques have been used in oceanography, where the study of images reveals interesting features of water flow, sediment concentration, geomorphology, and bathymetric patterns, to name a few. These features are more clearly observable in images that are enhanced to overcome the problems of moving targets, deficiency of light, and obscure surroundings.

6. IE techniques, when applied to pictures and videos, help the visually impaired in reading small print, using computers and television, and recognizing faces [18]. Several studies have been conducted [19], [20] that highlight the need and value of using IE for the visually impaired.

7. Virtual restoration of historic paintings and artifacts [21] often employs the techniques of IE in order to reduce stains and crevices. Color contrast enhancement, sharpening and brightening are just some of the techniques used to make the images vivid. IE is a powerful tool for restorers who can make informed decisions by viewing the results of restoring a painting beforehand. It is equally useful in discerning text from worn-out historic documents [22].

8. In the e-learning field, IE is used to clarify the contents of a chalkboard as viewed on streamed video; it helps students focus on the text and improves content readability [23]. Similarly, collaboration [24] through the whiteboard is facilitated by enhancing the shared data and diminishing artifacts like shadows and blemishes.

9. Numerous other fields, including meteorology, microbiology, biomedicine, bacteriology, climatology, law enforcement, etc., benefit from various IE techniques. These benefits are not limited to professional studies and businesses but extend to common users who employ IE to cosmetically enhance and correct their images.

Inspired by the use of image enhancement in a multitude of fields, this research aims at using these techniques on x-ray images, where the raw data obtained directly from the X-ray acquisition device may yield a relatively poor-quality image representation.

RESEARCH PROBLEM

The x-ray image enhancement problems can be classified into three main problems:

(1) X-ray images (especially thorax images) include different regions containing details. Both sharp and soft transitions between the regions and details may exist in all visual spans. When all details are enhanced to the same extent, the relatively significant details cover most of the visual span and prevent the visibility of relatively less significant details.

(2) Since X-ray images are used for diagnostic purposes, the enhancement must not introduce misleading information; making a structure look more or less significant than it actually is must be avoided.

(3) Data loss is not desirable in diagnostic images. Therefore, the noise attenuation procedure must not remove any visual information.

Another problem with X-ray (especially thorax) images is the risk of incorporating a priori information about the visual structures of the image for enhancement and denoising purposes. Unlike common images, X-ray images are renderings of volume data, and the transitions between the same structures may be smooth or sharp depending on the angle.

RESEARCH QUESTIONS

RQ1: Is it possible to enhance x-ray images without losing important details?

RQ2: Will the proposed method help doctors reach a correct diagnosis?

RESEARCH OBJECTIVES

The objectives of the study are:

  • To investigate image enhancement techniques that improve the quality of an x-ray image.
  • To propose a new framework for x-ray image enhancement.
  • To provide noise reduction capabilities, with considerably less blurring, by using an effective filter, the median filter.
  • To propose a method that increases the sharpness of x-ray images.
  • To design an x-ray image enhancement system based on the proposed methods for better diagnosis.

SIGNIFICANCE OF STUDY

The goal of an image enhancement technique is to improve the characteristics and quality of an image, such that the resulting image is better than the original image.

The enhancement operations have an important potential in obtaining as much easily interpretable diagnostic information as possible with reasonable absorbed doses of ionising radiation. Due to the increasing usage of high precision and resolution images with a limited number of human experts, the computational efficiency of the denoising and enhancement becomes important.

RESEARCH SCOPE

This research focuses on the enhancement of x-ray images.

The proposed system will work on x-ray images. X-ray images pose particular problems for enhancement: because they are used for diagnostic purposes, the enhancement must not introduce misleading information, and making a structure look more or less significant than it actually is must be avoided.

In this research, a suitable enhancement method will be used to obtain better-quality x-ray images.

CONTRIBUTION

The raw data obtained directly from the X-ray acquisition device may yield a relatively poor-quality image representation. For a correct diagnosis, we will use enhancement techniques to obtain better-quality images.

RESEARCH METHODOLOGY

Image enhancement improves the perception of information in images for human viewers and provides better input for other automated image processing techniques. The main objective of image enhancement is to modify features of an image to make it more suitable for a given task. A great deal of subjectivity is involved in the choice of image enhancement methods. Many techniques exist that can enhance a digital image without spoiling it.

The proposed method consists of three steps (a code sketch follows the list):

1. Apply Contrast Limited Adaptive Histogram Equalization (CLAHE) to the original x-ray image.

2. Apply a median filter to the contrast-enhanced image.

3. Create the negative of the image.
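A minimal Python sketch of these three steps, assuming scikit-image and SciPy are available and using a built-in test photograph as a stand-in for a real x-ray (the clip limit and window size are illustrative assumptions, not tuned values from this proposal):

import numpy as np
from skimage import data, exposure, img_as_ubyte
from scipy.ndimage import median_filter

xray = img_as_ubyte(data.camera())      # stand-in for an x-ray image (illustrative only)

# Step 1: CLAHE on the original image.
contrasted = img_as_ubyte(exposure.equalize_adapthist(xray, clip_limit=0.02))

# Step 2: median filter on the contrast-enhanced image (3 x 3 window).
filtered = median_filter(contrasted, size=3)

# Step 3: negative of the filtered image.
negative = 255 - filtered
print(negative.shape, negative.dtype)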

9.1 Contrast Limited Adaptive Histogram Equalization (CLAHE)

Adaptive histogram equalization is a computer image processing technique used to improve contrast in images. CLAHE is different from ordinary histogram equalization in that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image, whereas ordinary histogram equalization uses a single histogram for the entire image [2].

Adaptive histogram equalization is an image enhancement technique capable of improving an image’s local contrast, bringing out more detail in the image. However, it can also produce significant noise. Contrast limited adaptive histogram equalization (CLAHE) is a generalization of adaptive histogram equalization that was developed to address this problem of noise amplification.

The noise problem associated with AHE can be reduced by limiting contrast enhancement specifically in homogeneous areas. These areas can be characterized by a high peak in the histogram associated with the contextual regions since many pixels fall inside the same gray level range. The Contrast Limited Adaptive Histogram Equalization (CLAHE) limits the slope associated with the gray level assignment scheme to prevent saturation. This process is accomplished by allowing only a maximum number of pixels in each of the bins associated with the local histograms. After “clipping” the histogram, the clipped pixels are equally redistributed over the whole histogram to keep the total histogram count identical. The CLAHE process is summarized in Table 1.

The clip limit is defined as a multiple of the average histogram contents and is actually a contrast factor. Setting a very high clip limit essentially removes the clipping, and the process becomes a standard AHE technique. A clip or contrast factor of one prohibits any contrast enhancement, preserving the original image.

1. Obtain all the inputs:
   Image.
   Number of regions in the row and column directions.
   Number of bins for the histograms used in building the image transform function (dynamic range).
   Clip limit for contrast limiting (normalized from 0 to 1).

2. Pre-process the inputs:
   Determine the real clip limit from the normalized value.
   If necessary, pad the image (to an even size) before splitting it into regions.

3. Process each contextual region (tile), producing gray-level mappings:
   Extract a single image region.
   Make a histogram for this region using the specified number of bins.
   Clip the histogram using the clip limit.
   Create a mapping (transformation function) for this region.

4. Interpolate gray-level mappings in order to assemble the final CLAHE image:
   Extract a cluster of four neighboring mapping functions.
   Process the image region partly overlapping each of the mapping tiles.
   Extract a single pixel, apply the four mappings to that pixel, and interpolate between the results to obtain the output pixel.
   Repeat over the entire image.

Table 1: The CLAHE procedure.
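The clipping-and-redistribution step at the heart of this procedure can be sketched on a single tile's histogram as follows; this is a minimal Python illustration with an assumed bin count and clip limit, and it omits the per-tile mapping and interpolation stages of Table 1 (real implementations may also iterate the redistribution):

import numpy as np

def clip_and_redistribute(hist, clip_limit):
    # Clip a tile histogram at clip_limit and spread the excess evenly over all bins
    # (a single redistribution pass; clipped bins may end slightly above the limit).
    hist = hist.astype(float).copy()
    excess = np.sum(np.maximum(hist - clip_limit, 0))   # total count above the limit
    hist = np.minimum(hist, clip_limit)                 # "clip" the histogram
    hist += excess / hist.size                          # redistribute the excess equally
    return hist

# Illustrative tile: a homogeneous region whose pixels fall in a narrow gray range.
rng = np.random.default_rng(0)
tile = rng.integers(100, 110, size=(64, 64))
hist, _ = np.histogram(tile, bins=256, range=(0, 256))

clip_limit = 4 * hist.mean()                            # clip limit as a multiple of the mean contents
clipped = clip_and_redistribute(hist, clip_limit)
print(hist.max(), round(clipped.max(), 1), round(clipped.sum(), 1))  # total count is preserved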

9.2 Median Filter

We will use this kind of filter on the contrast-enhanced x-ray image. In signal processing, it is often desirable to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique, often used to remove noise from images. Noise reduction is a pre-processing step to improve the results of later processing (such as edge detection on an image). Median filtering is used widely in digital image processing because, under certain conditions, it preserves edges whilst removing noise.

The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the “window”, which slides, entry by entry, over the entire signal. For 1D signals, the most obvious window is just the first few preceding and following entries, whereas for 2D (or higher-dimensional) signals such as images, more complex window patterns are possible (such as “box” or “cross” patterns). Note that if the window has an odd number of entries, then the median is simple to define: it is just the middle value after all the entries in the window are sorted numerically. For an even number of entries, there is more than one possible median [2].

Advantages of the median filter:

  • It provides excellent noise reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.
  • Median filters are particularly effective in the presence of both bipolar and unipolar impulse noise.
  • The median value must be one of the pixel values present in the neighborhood, so the median filter does not create new, unrealistic pixel values.
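To illustrate the sliding-window behaviour described above, the following minimal Python sketch (the signal values and window sizes are illustrative) removes an impulse spike from a 1-D signal and a single "salt" pixel from a small image using SciPy's median_filter:

import numpy as np
from scipy.ndimage import median_filter

# 1-D case: replace each entry with the median of a 3-sample window.
signal = np.array([2, 3, 80, 3, 2, 4, 3], dtype=float)   # 80 is an impulse noise spike
print(median_filter(signal, size=3))                      # the spike is replaced by the local median

# 2-D case: a 3 x 3 window over an image corrupted by one "salt" pixel.
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
print(median_filter(img, size=3))                          # the salt pixel becomes the local median (10)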

9.3 Unsharp Mask

An “unsharp mask” is actually used to sharpen an image, contrary to what its name might lead you to believe. Sharpening can help emphasize texture and detail, and is critical when post-processing most digital images. Unsharp masks are probably the most common type of sharpening, and can be performed with nearly any image editing software (such as Photoshop). An unsharp mask cannot create additional detail, but it can greatly enhance the appearance of detail by increasing small-scale acutance.

The sharpening process works by utilizing a slightly blurred version of the original image. This is then subtracted from the original to detect the presence of edges, creating the unsharp mask (effectively a high-pass filter). Contrast is then selectively increased along these edges using this mask, leaving behind a sharper final image.
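This blur-subtract-add idea can be sketched directly with NumPy and SciPy; the blur radius (sigma) and amount are illustrative assumptions, and scikit-image also offers a ready-made skimage.filters.unsharp_mask with the same effect:

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data, img_as_ubyte

def unsharp_mask(image, sigma=2.0, amount=1.0):
    # Sharpen by adding back the difference between the image and a blurred copy.
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=sigma)   # slightly blurred version of the original
    mask = img - blurred                          # the "unsharp mask": mostly edge detail
    sharpened = img + amount * mask               # boost contrast along the edges
    return np.clip(sharpened, 0, 255).astype(np.uint8)

sharp = unsharp_mask(img_as_ubyte(data.camera()), sigma=2.0, amount=1.5)
print(sharp.dtype, sharp.shape)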

9.4. ImageJ

ImageJ is a public domain Java image processing and analysis program developed at the National Institutes of Health. It runs, either as an online applet or as a downloadable application, on any computer with a Java 1.5 or later virtual machine. ImageJ can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and ‘raw’. It supports ‘stacks’ (and hyperstacks), a series of images that share a single window. It has many tools and menu commands for easy use. We will use this environment to apply the proposed algorithms to x-ray images.

9.5 Research Database

To evaluate our proposed system, we need a database. The free database will be taken from the http://www.imageprocessingplace.com website. It has more than 50,000 x-ray images of different parts of the body; the standard database format is JPEG.

9.6 Evaluation

Almost every X-ray image needs to be improved to facilitate access to, or extraction of, the important information. At the same time, this process is sensitive because x-ray imaging is one of the ways diseases are diagnosed, and adding spurious information to x-ray images leads to a wrong diagnosis. To evaluate our work, after finishing the required implementation we will send the output data and input data to experts (doctors and X-ray radiographers), who will give their report on this work.

LITERATURE REVIEW

1. CLASSIFICATION OF IMAGES:

1.1 Intensity Images

An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers. Values of scaled, class double intensity images are in the range [0, 1] by convention [25].

1.2 Indexed Images

An indexed image is an array of class logical, uint8, uint16, single, or double whose pixel values are direct indices into a color map. The color map is an m-by-3 array of class double. For single or double arrays, integer values range over [1, p]. For logical, uint8, or uint16 arrays, values range over [0, p-1]. An indexed image consists of an array and a color map matrix. The pixel values in the array are direct indices into the color map. By convention, this documentation uses the variable name X to refer to the array and map to refer to the color map [25].

1.3 Binary Images

Binary images have a very specific meaning in MATLAB. In a binary image, each pixel assumes one of only two discrete values, 0 or 1, interpreted as black and white, respectively. A binary image is stored as a logical array. Thus, an array of 0s and 1s whose values are of a numeric data class, say uint8, is not considered a binary image in MATLAB [25].

Figure 2. Binary image

1.4 Grayscale Images:

A grayscale image (also called gray-scale, gray scale, or gray-level) is a data matrix whose values represent intensities within some range. MATLAB stores a grayscale image as an individual matrix, with each element of the matrix corresponding to one image pixel. By convention, this documentation uses the variable name I to refer to grayscale images. A grayscale image is an array of class uint8, uint16, int16, single, or double whose pixel values represent intensities. For single or double arrays, values range over [0, 1]. For uint8, values range over [0, 255]. For uint16, values range over [0, 65535]. For int16, values range over [-32768, 32767] [25].

Figure 3. Grayscale image

1.5 True color Images

A true color image is an image in which each pixel is specified by three values, one each for the red, green, and blue components of the pixel's color. MATLAB stores true color images as an m-by-n-by-3 data array that defines the red, green, and blue color components for each individual pixel. True color images do not use a color map. The color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel's location. Graphics file formats store true color images as 24-bit images, where the red, green, and blue components are 8 bits each. This yields a potential of 16 million colors. The precision with which a real-life image can be replicated has led to the commonly used term true color image [25].

Figure 4. Color Image.

2. X-Ray images

The very first X ray device was discovered accidentally by the German scientist Wilhelm Röntgen (1845-1923) in 1895. He found that a cathode-ray tube emitted certain invisible rays that could penetrate paper and wood and, as the first person in the world to see through human flesh, he even saw a perfectly clear outline of the bones in his own hand. Röntgen studied these new rays, which he called x rays, for several weeks before publishing his findings in December of 1895. For his great discovery, he was given the honorary title of Doctor of Medicine and awarded the 1901 Nobel Prize for physics. Adamant that his discovery should be freely available for the benefit of humankind, Röntgen refused to patent it [26].

X rays are waves of electromagnetic energy which behave in much the same way as light rays, but at wavelengths approximately 1000 times shorter than the wavelength of light. X rays can pass uninterrupted through low-density substances such as tissue, whereas higher-density targets reflect or absorb the X rays because there is less space between the atoms for the short waves to pass through. Thus, an x ray image shows dark areas where the rays traveled completely through the target (such as with flesh) and light areas where the rays were blocked by dense material (such as bone). Following the discovery of x rays in 1895, this scientific wonder was seized upon by sideshow entertainers who allowed patrons to view their own skeletons and gave them pictures of their own bony hands wearing silhouetted jewelry.

The most important application of the x ray, however, was in medicine, an importance recognized almost immediately after Röntgen’s findings were published. Within weeks of its first demonstration, an x ray machine was used in America to diagnose bone fractures. Thomas Alva Edison invented an x-ray fluoroscope in 1896, which was used by American physiologist Walter Cannon (1871-1945) to observe the movement of barium sulfate through the digestive system of animals and, eventually, humans. In 1913 the first x-ray tube designed specifically for medical purposes was developed by American chemist William Coolidge. X rays have since become the most reliable method for internal diagnosis.

At the same time, a new science was being founded on the principles introduced by German physicist Max von Laue (1879-1960), who theorized that crystals could be to x rays what diffraction gratings were to visible light. He conducted experiments in which the interference patterns of x rays passing through a crystal were examined; these patterns revealed a great deal about the internal structure of the crystal. William Henry Bragg and his son William Lawrence Bragg took this field even further, developing a system of mathematics that could be used to interpret the interference patterns. This method, known as x-ray crystallography, allowed scientists to study the structures of crystals with unsurpassed precision and is an important tool for scientists, particularly those striving to synthesize chemicals. By analyzing the information within a crystal’s interference pattern, enough can be learned about that substance to create it artificially in a laboratory, and in large quantities. This technique was used to isolate the molecular structures of penicillin, insulin, and DNA.

Modern medical x-ray machines are grouped into two categories: “hard” or “soft” x rays. Soft x rays, which operate at a relatively low frequency, are used to image bones and internal organs and, unless repeated excessively, cause little tissue damage. Hard x rays, at very high frequencies designed to destroy molecules within specific cells and thus destroy tissue, are used in radiotherapy, particularly in the treatment of cancer. The high voltage necessary to generate hard x rays is usually produced using cyclotrons or synchrotrons (variations of particle accelerators, or atom smashers).

In 1996, amorphous silicon x-ray detectors were introduced which produce real-time, high resolution images by converting x-rays into light and the light into electrical signals, which are interpreted by a computer that produces digital data displayed as digital images; these can be enlarged to target a specific area. Images are filmless and instantly available, formatted for electronic storage and/or transmission. First applied to mammography, this technology reduces radiation and the cost of film and storage, and can be used in industrial applications. Also in 1996, researchers at NASA’s Marshall Space Flight Center developed the high resolution or high brilliance x ray, which generates beams 100 times more intense than conventional x rays. These beams can be controlled and focused by reflecting them through tens of thousands of tiny curved capillaries, much as light is directed through fiber optics. NASA is using this instrument to define the atomic structure of proteins for use as blueprints in designing drugs. It may also lead to smaller, less expensive, and safer x-ray sources [26].

3. Image enhancement

Image enhancement is concerned with the sharpening of image features such as edges or contrast, and has been employed to improve the visual appearance of images. A variety of image enhancement approaches have been proposed for medical images, such as histogram equalization [27] and unsharp masking [28, 29]. These approaches can be generally classified into two categories: global and local (adaptive) enhancements. A global enhancement applies a single transform or mapping to all image pixels, but a local enhancement uses an individual mapping on the local area of a processing pixel. The global enhancement methods may work well for some images, but poorly for most images such as non-un


Determining BMD through Pulse-Echo Ultrasound


SPECIFIC AIMS

Osteoporosis is one of the most common skeletal diseases. It is characterized by thinning of bone tissue and loss of bone mass density over time. Low bone density is one of the important risk factors for osteoporosis and a strong predictor of fracture. Increasing loss of bone density leads to an increase in bone fragility and susceptibility to fracture.

Currently, bone mass density testing is considered the best way to determine whether a person has osteoporosis, osteopenia, or normal bone density for their sex and age. The gold standard BMD test, dual-energy X-ray absorptiometry (DEXA), is currently the most accurate way to test for BMD. DEXA is able to measure BMD at high-risk fracture sites in the central skeleton such as the hip and spine. However, it has disadvantages such as high cost, limited portability, and radiation exposure. Newer techniques for measuring BMD, such as ultrasound, do not provide a definitive diagnosis due to some limitations. The current ultrasound technique is not able to test high-risk fracture bones such as the spine and hip. Quantitative ultrasound techniques have been shown to offer similar ability to diagnose osteoporosis as DEXA (Hans et al. 1996; Njeh et al. 2000; Frost et al. 2001; Glüer et al. 2004). Most clinical QUS devices are based on the through-transmission measurement of the calcaneus (Njeh et al. 1997), which is not a typical fracture site (Hasselman et al. 2003). BMD at the calcaneus is not always a sufficient predictor for bones in the central skeleton. The correlation between calcaneus BMD and hip BMD is not high enough when determining fracture risk at bones of the central skeleton such as the hip. A new method should help determine the BMD at the fracture site directly rather than using another bone as a predictor. The pulse-echo ultrasound technique is promising, as it uses only one transducer rather than the two used in the through-transmission technique, and could possibly test BMD at the hip as the gold standard DEXA does.

The proposed research will test two hypotheses: 1) pulse-echo ultrasound has equal or higher accuracy than through-transmission ultrasound in determining BMD at the calcaneus bone; and 2) pulse-echo ultrasound is able to determine the BMD of bones surrounded by soft tissue in the central skeleton, such as the hip.

AIM 1: Determine the BMD of the calcaneus bone using a pulse-echo ultrasound technique. The accuracy of the pulse-echo technique in determining BMD at the calcaneus bone will be tested and compared to the current through-transmission ultrasound technique. If the pulse-echo technique is found to determine BMD at the calcaneus bone as well as or better than other techniques, further studies could be made at other osteoporosis-related fracture sites that the current ultrasound technique cannot assess.

AIM 2: Determine whether a pulse-echo ultrasound technique accounts for the soft tissue surrounding the hip and increases accuracy in determining BMD. We will test whether a pulse-echo ultrasound technique, which uses only one transducer, is able to determine bone BMD at fracture sites that were not possible to evaluate with the through-transmission ultrasound technique, based on the advantage of a single transducer and a new ultrasound approach that takes into account the tissue surrounding the bone studied. Results will be compared to the gold standard (axial DEXA) for determining BMD of the hip.

RESEARCH STRATEGY

A. SIGNIFICANCE TO MUSCULOSKELETAL HEALTH

Osteoporosis is currently one of the most common bone diseases. Osteoporosis means porous bones; with increased porosity, the bones become more brittle. Osteoporosis is linked to high-risk fractures, which occur commonly in the spine, hip, and wrist. Unfortunately, osteoporosis commonly remains undiagnosed until a fracture occurs. Previous studies have shown that a bone mineral density (BMD) test is the best way to determine the bone health of an individual. A BMD test can address three key aspects: whether osteoporosis is actually present, the risk for fractures, and the response to osteoporosis treatment. The gold standard test for BMD is called dual-energy x-ray absorptiometry, or the DEXA test. Although the DEXA test gives accurate BMD results, it has some drawbacks: high cost, radiation exposure, and low portability. Other tests currently used to determine BMD include CT and ultrasound. Ultrasound, which is a possible alternative to DEXA, still has some limitations that decrease its accuracy in determining BMD.

B. INNOVATION

Based on previous studies using the ultrasound technique to determine BMD, we hypothesize that by applying an ultrasound pulse-echo approach we can determine BMD with high accuracy in bones with a high fracture risk in the central skeleton, such as the hip. If the ultrasound pulse-echo technique is found to have accuracy similar to DEXA, we would have a new diagnostic test with low cost, no radiation exposure, and high portability compared to the gold standard in BMD determination.

Overlying soft tissue induces significant errors in bone ultrasound measurements

Previous studies have shown that the current clinical through-transmission ultrasound technique suffers from measurement uncertainties that are related to the soft tissue surrounding the bone studied. Soft tissues overlying the bone have a major impact on the measured parameters in a BMD test. The variable thickness and composition of the soft tissue layer overlying the skeletal bones significantly increase uncertainties in bone US measurements (Kotzki et al. 1994; Gomez et al. 1997; Johansen and Stone 1997; Chappard et al. 2000; Riekkinen et al. 2006). Central skeletal bones such as the hip, which have a high fracture risk, are surrounded by a moderate quantity of soft tissue, causing errors in the information received by an ultrasound transducer. Previous studies have shown that in order to reduce or minimize such errors related to the soft tissue surrounding the bone, a selection of optimal US frequencies must be determined. The ultrasound pulse-echo technique, which uses only one transducer, may account for and reduce soft tissue effects at central skeletal bones such as the hip, which is impossible with the current through-transmission ultrasound technique.

Most clinical QUS devices are based on through-transmission measurement of the calcaneus

Most bone ultrasound devices are designed for through-transmission measurement of the calcaneus bone. Current ultrasound devices are only able to assess bone at peripheral locations such as the heel. Clinical US devices in current use rely on the through-transmission technique, which requires two transducers placed on opposite sides of the bone being studied. The accuracy of this technique is highly affected by the soft tissue surrounding the bone being analyzed, which is why the heel has become the standard site for ultrasound BMD measurement.

Fig. 1: Quantitative ultrasound (QUS) uses a high-frequency sound wave. We may measure how quickly sound travels through bone, termed velocity or speed of sound (SOS, in m s-1), or how much sound is absorbed by the bone, generally referred to as broadband ultrasound attenuation (BUA).
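
The two parameters named in the caption reduce to simple signal processing. The following Python sketch is illustrative only: the function names, the 0.2-0.6 MHz analysis band, and the assumption that amplitude spectra are available for a reference (water-only) path and a through-bone path are ours, not taken from any particular device.

```python
import numpy as np

def speed_of_sound(heel_width_m, transit_time_s):
    """SOS (m/s): path length through the heel divided by the pulse transit time."""
    return heel_width_m / transit_time_s

def broadband_ultrasound_attenuation(freqs_hz, ampl_reference, ampl_bone,
                                     band=(0.2e6, 0.6e6)):
    """BUA (dB/MHz): slope of attenuation versus frequency over the analysis band.

    ampl_reference and ampl_bone are amplitude spectra of the received pulse for a
    water-only path and for the path through the heel, at the frequencies freqs_hz.
    """
    attenuation_db = 20.0 * np.log10(ampl_reference / ampl_bone)
    in_band = (freqs_hz >= band[0]) & (freqs_hz <= band[1])
    slope_db_per_mhz, _intercept = np.polyfit(freqs_hz[in_band] / 1e6,
                                              attenuation_db[in_band], 1)
    return slope_db_per_mhz
```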

Using the pulse-echo ultrasound technique, only one transducer would be required and the backscattered wave would be analyzed rather than the attenuation. The flexibility of a single transducer allows more accurate prediction through direct measurement at high-risk fracture sites such as the hip.

C. APPROACH

Overview:

Previous studies show a pressing need to overcome the limitations of the current ultrasound technique while keeping the advantages that favour ultrasound over DEXA for BMD determination. In the proposed research, we will

  1. Determine whether pulse-echo ultrasound has equal or higher accuracy than through-transmission ultrasound in determining BMD at the calcaneus bone (AIM 1), and
  2. Determine whether pulse-echo ultrasound is able to determine the BMD of bones surrounded by soft tissue in the central skeleton, such as the hip (AIM 2).

Experiments AIM 1

These studies will determine the accuracy of using pulse-echo ultrasound to determine BMD at the calcaneus bone. The results will be compared to through-transmission ultrasound. Bone density of the heel will be measured with an ultrasound transducer in two different modes: through-transmission and pulse-echo. This experiment will be carried out in vivo, with 100 volunteers, both men and women, between 40 and 50 years of age. In large-scale screening studies in the general population, quantitative ultrasound measurement of bone (QUS) has been used to identify people at risk of developing osteoporosis and fractures (Hollaender et al., 2009; Khaw et al., 2004). The ultrasound transducer beam will be angled slightly posteriorly to keep the beam axis as close to perpendicular to the plantar surface of the os calcis as possible. Three measurements at slightly different locations along the axis of the os calcis will be obtained for each of the two ultrasound modes. Experiments are summarized in Table 1.

Table 1: Summary of experiments and analyses for Experiment 1 (N=100/group)

Ultrasound technique mode | Parameters analyzed
Through-transmission | SOS (speed of sound), BUA (broadband ultrasound attenuation)
Pulse-echo | BUB (broadband ultrasound backscattering)

Expected Results and Interpretation of Data

Using the broadband ultrasound attenuation (BUA) and speed of sound (SOS) parameters for the through-transmission mode, a T-score will be derived in order to obtain bone mineral density at the calcaneus bone. The same procedure will be completed for the pulse-echo mode, but using a different parameter, broadband ultrasound backscattering. We expect a high correlation between the BMD obtained with the through-transmission mode and with the pulse-echo mode.
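
The expected agreement between the two modes can be quantified once per-subject BMD estimates are available from each. A minimal analysis sketch, assuming two equal-length arrays of BMD values; the function name and the Bland-Altman-style summary are our own choices, not part of the protocol:

```python
import numpy as np
from scipy import stats

def compare_modes(bmd_through_transmission, bmd_pulse_echo):
    """Agreement between per-subject BMD estimates from the two ultrasound modes."""
    r, p = stats.pearsonr(bmd_through_transmission, bmd_pulse_echo)
    diff = np.asarray(bmd_pulse_echo) - np.asarray(bmd_through_transmission)
    return {
        "pearson_r": r,                            # strength of linear association
        "p_value": p,
        "mean_bias": diff.mean(),                  # systematic offset between modes
        "limits_of_agreement": 1.96 * diff.std(ddof=1),
    }
```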

Experiments AIM 2

These studies will determine whether pulse-echo ultrasound is able to determine the BMD of bones surrounded by soft tissue in the central skeleton, such as the hip. During recent years, numerous studies have focused on developing pulse-echo ultrasonic techniques for the characterization of trabecular bone and for aiding the diagnosis of osteoporosis (Chaffai et al. 2002; Hakulinen et al. 2004, 2005, 2006; Hoffmeister et al. 2002a, 2002b, 2006; Padilla et al. 2008; Riekkinen et al. 2007a, 2007b; Roberjot et al. 1996; Roux et al. 2001; Wear 1999, 2003, 2008; Wear and Laib 2003; Wear et al. 2005). Based on the results obtained in the AIM 1 experiments, we will determine whether pulse-echo BMD measured at the heel is a good indicator for predicting BMD at central bones such as the hip. However, the main purpose of this aim is to evaluate the efficacy of this new ultrasound mode in determining BMD at bones with a high fracture risk and to compare the results with the gold standard, axial DEXA, for evaluating BMD at the hip. We will determine the BMD of 30 female volunteers over 50 years of age, the age at which most fractures due to osteoporosis or low BMD commonly occur. Prior to evaluation, volunteer information such as BMI and age at menopause will be collected. A single ultrasound transducer will be used to determine BMD at the hip.

Table 2: Summary of parameters analyzed and diagnostic techniques for Experiment 2 (N=30/group)

Diagnostic technique | Parameters analyzed
Pulse-echo ultrasound | BUB (broadband ultrasound backscattering)
Axial DEXA | BMD (gold-standard reference)

Expected Results and Interpretation of Data

Using the pulse-echo ultrasound technique, we expect to determine BMD at the hip with accuracy similar to DEXA, since the broadband ultrasound backscattering (BUB) parameter allows us to account for the soft-tissue effect that commonly introduces errors when using ultrasound. The soft tissue surrounding the hip will be characterized by measuring the reflection from the bone surface at two frequencies to estimate the thicknesses of the lean and adipose tissue layers.
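
One way to read that last step is as a two-equation, two-unknown problem: the round-trip attenuation of the bone-surface echo at each frequency is modelled as the sum of contributions from the lean and adipose layers. The sketch below only illustrates the idea; the attenuation coefficients are placeholder values, not measured constants.

```python
import numpy as np

def soft_tissue_thicknesses(atten_db_f1, atten_db_f2,
                            alpha_lean=(1.5, 7.5),  # dB/cm for lean tissue at f1, f2 (placeholders)
                            alpha_fat=(0.9, 4.5)):  # dB/cm for adipose tissue at f1, f2 (placeholders)
    """Solve for lean and adipose layer thicknesses (cm) from the attenuation of the
    bone-surface reflection measured at two frequencies."""
    # Factor 2: in pulse-echo mode the wave crosses the soft tissue twice.
    A = 2.0 * np.array([[alpha_lean[0], alpha_fat[0]],
                        [alpha_lean[1], alpha_fat[1]]])
    b = np.array([atten_db_f1, atten_db_f2])
    d_lean_cm, d_fat_cm = np.linalg.solve(A, b)
    return d_lean_cm, d_fat_cm
```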

Methods and Analysis AIM 1:

  • Ultrasonic measurements will be performed using two broadband focused transducers so that the new mode can be compared with the current ultrasound technique (through-transmission). The device will be used in two different modes: transmission and backscatter.
  • Before the bone density measurement, each individual will complete a questionnaire covering basic information, body measurements (body weight, body height, WHR), and family history of osteoporosis and fracture.
  • Both the left and right heels will be measured three times each for BMD.
  • For this study the WHO criteria will be applied, classifying patients whose BMD is more than 2.5 standard deviations below the young-adult average (T-score < -2.5) as osteoporotic, and patients with a T-score between -1 and -2.5 as osteopenic.
  • Both T-score and Z-score will be computed. The T-score compares the volunteer's bone status with the average peak value in healthy young people, and the Z-score provides context by comparing individual measurements with the mean value for people of the same age and gender.
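
The two scores and the WHO cut-offs in the last two points reduce to simple arithmetic. A minimal sketch, assuming reference means and standard deviations are available from the device or a population database (the function names are ours):

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """T-score: standard deviations from the healthy young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_sex_matched_mean, age_sex_matched_sd):
    """Z-score: standard deviations from the mean for the same age and gender."""
    return (bmd - age_sex_matched_mean) / age_sex_matched_sd

def who_classification(t):
    """WHO criteria: T-score below -2.5 is osteoporotic; between -2.5 and -1 is osteopenic."""
    if t < -2.5:
        return "osteoporotic"
    if t < -1.0:
        return "osteopenic"
    return "normal"
```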

Methods and Analysis AIM 2:

Similar methods and analyses to those in AIM 1 will be used. AIM 2 will use the backscattering mode of the ultrasound system to determine BMD at the volunteer's hip. The backscatter coefficient will be measured as proposed in previous studies using a substitution technique, in which the signal scattered from the region under test is compared with the signal from a standard reflecting target (Ueda M et al. 1985). Results will be compared to axial DEXA BMD measurements.
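
The substitution technique cited above compares the spectrum of the signal scattered from the bone region with the spectrum of the echo from a standard reflecting target recorded with the same transducer and settings. A rough Python illustration of that comparison; time gating, diffraction and attenuation compensation are omitted, and the variable names are ours:

```python
import numpy as np

def apparent_backscatter_db(rf_bone, rf_reference, fs_hz, band=(1.0e6, 5.0e6)):
    """Substitution method: mean backscattered power from the bone region relative to
    a planar reference reflector, over the analysis band (result in dB).

    rf_bone and rf_reference are equal-length, time-gated RF traces sampled at fs_hz.
    """
    freqs = np.fft.rfftfreq(len(rf_bone), d=1.0 / fs_hz)
    power_bone = np.abs(np.fft.rfft(rf_bone)) ** 2
    power_ref = np.abs(np.fft.rfft(rf_reference)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    ratio_db = 10.0 * np.log10(power_bone[in_band] / power_ref[in_band])
    return float(ratio_db.mean())
```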

KEY PRELIMINARY STUDIES

Previous studies introducing pulse-echo ultrasound (Riekkinen O, Hakulinen et al. 2008) have shown that the soft-tissue error present when determining BMD with ultrasound can be removed or reduced with this new technique. The broadband ultrasound backscatter parameter, as well as other pulse-echo parameters, can be used to reduce the soft-tissue error with a numerical method (Riekkinen O et al. 2001).

A new ultrasound method for soft-tissue correction of bone ultrasound measurements has been introduced. Validation with elastomer samples demonstrated a significant improvement in the accuracy of ultrasound measurements.

In living tissues, the dual-frequency ultrasound technique reduced the mean soft tissue-induced error in BUB and in IRC (at 5.0 MHz) from 58.6% to -4.9% and from 127.4% to 23.8%, respectively. Values (mean ± SD) of IRC and BUB in human trabecular bone are given in Table 3.

Fig. 2: (a, c) Mean values of the pulse-echo parameters IRC and BUB before and after soft-tissue correction, and as measured without surrounding soft tissue. (b, d) The soft tissue-induced error increases as a function of US frequency; the error can be reduced by means of numerical correction.


Drug and Alcohol Interventions: Analysis of Benefits


Research proposal

Introduction

This study seeks to examine whether drug and alcohol interventions benefit service users, particularly from an adult perspective. It will address the help that is available for individuals who recognise the need to be rid of their addiction and to be restored to their normal routine of life before the addiction takes any further hold of them. A qualitative approach will be used, as this best captures the experiences participants have faced, and interviews will be used to gather concrete data. When a person becomes addicted, the individual no longer consumes alcohol or drugs for fun or to get high; in fact, the person now needs the alcohol or drugs in order to function on a day-to-day basis. In some circumstances, the addicted person's daily life will revolve around fulfilling their need for the substance on which he or she is hooked.

Literature Review

Intervention is the course of action an individual turns to when all other options have been exhausted in an attempt to help a person conquer a drug or alcohol problem. It is an intentional method by which change is introduced into an individual's thoughts, feelings and behaviour. The process of drug intervention normally seeks reinforcement from a wide variety of service providers. In addition to specialist addiction services, this may include general practitioners, pharmacists, hospital staff, social workers, and those working in housing, education and employment services, who see it as crucial to approach individuals whom they recognise are harming themselves. The National Treatment Agency for Substance Misuse (NTA) is a special health authority within the NHS, established by Government in 2001 to improve the availability, capacity and effectiveness of treatment for drug misuse in England (NTA, 2007). The NTA has argued that there is an absolute need for combined and harmonised input from a diverse range of professional groups. However, local regions should offer individuals who misuse substances the choice of both generic and specialist interventions (NTA 2006).

“Illicit drug users have multiple and complex needs, including high levels of morbidity and mortality, domestic and family problems, homelessness, physical and sexual abuse, and unemployment” (Neale 2002).

However, in order to get help, the person struggling with addiction must first recognise the need for it. An individual with substance misuse issues often finds it hard to accept that they have a problem; to them it can seem as if the world around them is at fault or is making a fuss over nothing. People who are uncompromising about their addiction do not recognise the gravity of their problem. What matters to them is obtaining the drug, regardless of the consequences; neither health nor legal consequences are taken into consideration.

“Alcohol & Drug Services has valued its involvement with ITEP. The project has delivered immediate and tangible, benefits for clients though mapping interventions that are clear, straightforward and meaningful.” Hogan. T. 2007. (Alcohol and Drug services)

The International Treatment Effectiveness Project (ITEP) is a branch of the National Treatment Agency's Treatment Effectiveness strategy, which addresses ways of improving the quality of treatment interventions. ITEP employs an intervention to support care planning referred to as "mapping", delivered through a structured guide to changing thinking patterns. 'Mapping is a visual communication tool for clarifying shared information between client and key worker. It helps clients to look at the causes and effects of their thinking and also assists in problem solving' (NTA, 2007). Mapping is used by qualified key workers with their service users in the form of maps consisting of five stages, showing the phases a client goes through before acknowledging that they may have a serious drug problem. Besides the mapping, the treatment manual includes a concise intervention designed to change clients' thinking patterns. This helps them to explore themselves and recognise the stage they are at; it highlights their strengths, the things that matter most to them in life (for example decision making, social relationships, careers, and their morals and beliefs) and how best they can improve their lives. It was envisaged that services implementing this treatment manual would see an improved and encouraging change in service users' self-assessments of their treatment experience over time, in comparison with clients in services that had little or no mapping. Research reports that alcohol and drug services have valued their involvement with ITEP, claiming that the project has provided direct and substantial benefit to service users.

Another programme that works alongside the National Treatment Agency is the Drug Interventions Programme (DIP), which plays an important role in tackling drugs and reducing crime. Launched in 2003, it was aimed at adult offenders who misuse Class A drugs, such as heroin and cocaine, with the goal of helping them out of crime and into treatment and the other support available to them. The Drug Interventions Programme Operational Handbook states that over £900m has been invested in DIP since the programme was established, and continued funding is available to ensure that the DIP way of working becomes the established approach to drug-misusing offenders across England and Wales. The majority of offenders who make use of the Drug Interventions Programme are among the hardest to reach and most challenging drug misusers, and many have not previously had access to treatment in any meaningful way. The advantage of DIP is that it concentrates on the needs of offenders by finding innovative ways of inter-professional working, while linking pre-existing ones, across the criminal justice system, healthcare and drug treatment services, along with a variety of other support and rehabilitative services. The Drug Interventions Programme and the Prolific and other Priority Offender Programme (PPO) are similar in their joint intention to reduce drug-related offending by moving prolific and other priority offenders into treatment, rehabilitation and other support services.

Improving the quality of Tier 4 provision is a fundamental part of the National Treatment Agency's (NTA) Treatment Effectiveness strategy. This recognises the role that all stakeholder sectors can play in working together to find solutions and improvements. Tier 4 provision offers supportive responses to drug misusers whose use has been prolonged, whose intake is substantial, or who have complex needs, and it can allow drug users to move towards long-term abstinence when and where appropriate. Residential services can also admit and support chaotic clients. However, some Tier 4 provision may also have a significant role to play in diverting individuals away from continued substance misuse by intervening early. In line with this, the NTA has already produced guidance on commissioning Tier 4 provision, specifically Models of Residential Rehabilitation for Drug and Alcohol Misusers (NTA, 2006d) and Commissioning Tier 4 Drug Treatment (NTA, 2006b).

Tier 4 consists of two different but related categories of service provision as defined by Models of Care: inpatient treatment (IP) and residential rehabilitation (RR). Aftercare (AC) is a closely related category of service provision (see Annex 1-3 for definitions). This document seeks to be clear as to which type of provision is being referred to at any given point, denoted by IP, RR and AC. The term "Tier 4" is only used when the guidance could apply to all interventions. It is assumed that all references to Tier 4 provision will have due regard to integrated care pathways with Tier 3 or Tier 2 provision and with aftercare. Aftercare is not always residential and can take a range of different forms when delivered in a community setting. In addition, commissioners may need to consider the wider context of mainstream health and social care commissioning initiatives when reading this guidance, notably the requirement for local authorities and primary care trusts to form health and wellbeing partnerships and carry out joint strategic needs assessments of their populations, in accordance with the Local Government and Public Involvement in Health Act 2007.

Aim

How do drug and alcohol interventions in health and social care benefit service users?

The study also seeks to test whether the following hypothesis is true.

Hypothesis:

H1: Drug and alcohol interventions in health and social care benefit service users.

Null: Drug and alcohol interventions in health and social care will not benefit service users.

Methodology

Qualitative data

Qualitative data refer to words or images and to the methods used to interpret them. Qualitative data do not exist 'out there' waiting to be discovered, but are shaped by the way they are interpreted and used by the researcher; the character of qualitative data is therefore affected by the act of research itself. The qualitative approach emphasises an in-depth understanding of a research topic as experienced by the participants of the research. The qualitative approach has been used to study extremely complex experiences that cannot be understood without being expressed in words (Bradbury & Lichtenstein, 2000), and others have suggested that studies seeking to answer 'what' or 'how' type questions are well suited to a qualitative approach (Lee et al., 1999). Qualitative research usually does not seek to measure or evaluate the objects under examination using numbers, as that approach belongs to the quantitative domain. The depth of qualitative data develops from the conversation between the researcher and the participant; the insights achieved through this process can only be gained through the interaction between the two.

Research Strategy:

The research strategy chosen is the plan for answering the research questions (Saunders et al, 2000). It is a choice about the methodology to be used and how it is to be used (Silverman, 2005). A research strategy classifies the alternative strategies of inquiry as quantitative, qualitative or mixed-method approaches (Creswell, 1998). For this research a phenomenological approach is used. Phenomenology has its roots in philosophy and provides a framework for a method of research. 'It is based within the Humanistic research theory and follows a qualitative approach' (Denscombe, 2003). The aim of phenomenological sampling is to investigate fully and describe lived experience. 'It stresses that only those that have experienced phenomena can communicate them to the outside world' (Todres and Holloway, 2004).

The phenomenological research strategy therefore answers questions about the meaning of an experience from those who have lived through it. The phenomenological term 'lived experience' is synonymous with this research approach. 'Phenomenology consequently aims to develop insights from the perspectives of those involved by them detailing their lived experience of a particular time in their lives' (Clark, 2000). This sampling is about searching for the meanings and essences of the experience. It gathers descriptions of experiences by hearing first-person accounts during informal one-to-one interviews. These are then transcribed and analysed for themes and meanings (Moustakas, 1994), allowing the experience to be understood. Husserl's phenomenological enquiry originally arose from the conviction that conventional scientific methods might not be the best way to study human phenomena and had become so detached from the fabric of human experience that they were in fact hindering our understanding of ourselves (Crotty, 1996). He therefore felt driven to establish a rigorous discipline that found truth in the lived experience (LoBiondo-Wood and Haber, 2002).

Quantitative v Qualitative:

Quantitative data lend themselves to various forms of statistical techniques based on the principles of mathematics and probability. In contrast, qualitative research is suited to investigating and seeking a deeper understanding of a social setting or an activity as viewed from the perspective of participants (Bloomberg and Volpe, 2008).

Qualitative research is concerned with the nature, explanation and understanding of phenomena. Unlike quantitative data, qualitative data are not measured in terms of frequency or quantity but rather are examined for in-depth meanings and processes (Labuschagne, 2003). Silverman (2006:42) warns that quantitative research can amount to a “quick fix” approach involving little or no contact with people or field and has been deemed inappropriate for understanding complex social phenomena.

Approach:

Typical methods used in qualitative research are structured interviews, surveys, structured observations and potentially a focus group. Here the researcher is placed in the midst of the participants for a while and learns from them through direct contact. Silverman (2006) recommends a qualitative philosophy as appropriate when the researcher seeks to investigate an incompletely documented phenomenon and aims to provide a better understanding of social phenomena where processes are involved. Even without wanting to shift entirely away from a purely quantitative view of health, many people now appreciate that a basic understanding of qualitative research can have a positive effect on our thinking and practice. It offers new ways of understanding the complexity of health care, new tools for collecting and analysing data, and new vocabulary for making arguments about the quality of the care we offer. As a consequence of this enhanced learning, we come to realise that qualitative research is neither a sham science nor a poor substitute for experimentation.

Interviews:

Interviews will be the method used to gather data for this research. They are widely used for collecting data in qualitative research. 'They are typically used as a research strategy to gather information about participants' experiences, views and beliefs concerning a specific research question or phenomenon of interest' (Lambert and Loiselle, 2007). Important types of interview are identified by Babbie (2007): the standardized interview, the semi-standardized interview and the unstandardized interview. The distinctions between the types chiefly concern how the interview is structured.

Interview process:

Individuals will be chosen from a population of 200 service users who attend the local drug drop-in centre on a weekly basis for counselling, rehab or to be signposted to other agencies that might be of help. Such individuals might be undergoing drug or alcohol intervention treatment to help them steer away from their addiction. The sample will target adults who may be institutionalised or living at home, but who face the challenges of being an addict and are trying to seek help. The sample size will be 10 participants, all resident within the Northamptonshire area. Interview notifications were sent in advance to prepare participants. A consent form was sent prior to the interviewer's visit (see Appendix A), and participants were provided with an outline of the types of questions that might be asked at the interview (see Appendix B). This was to ensure that they had adequate time to prepare and reflect on what they would like to share, and to ensure the interviewer collected the right information from the interview. In a qualitative interview it is important that the questions capture the interviewee's perceptions and not those of the researcher (Perry, 1998). This is mostly to verify that the responses given were not prompted by the interviewer.

The interviews were carried out at the local drug and alcohol drop-in centre in a room away from other clients. This was to enable full concentration and to allow participants to be more open, as they might feel embarrassed about the issue at hand. The researcher asked the questions in the interview schedule, which can be found in Appendix B. During the interview a soft approach was taken to give the participant a chance to settle down and relax. For this reason an easy question was asked first, something the interviewee might already have had time to form views on. The interviews took twenty-five minutes per participant and notes were recorded during the interview. A convenience sample best represents the direction of this research, as it generally assumes a consistent population in which one person is much like another.

Data Analysis

The data gathered from the interviews provide concrete evidence in relation to the information presented in the literature review. Although the literature review did not contain enough data to speak on behalf of service users about how they felt while going through the different treatments, the interviews helped to shed light on what they thought. When asked how they recognised that they needed help, some said they realised their family lives were a mess, that they were unable to hold down employment, and other issues. The responses received were somewhat shocking, as some participants found they were still struggling to be rid of their addiction while others were trying to return to a normal life within society. Those who shared that they were still finding it difficult attributed this to the environment they remained in, which did not help them refrain but rather tempted them more; for some, this was the main challenge they faced. Others recognised that intervention treatment centres were readily available to help them, which one can say is a good sign for them.

Ethical Consideration

Qualitative research confronts ethical issues and challenges unique to the study of human beings. Established disciplines such as physics, chemistry and biology permit the researcher to adopt a point of view separate from the object under investigation, a detachment that cannot be assumed in the same way when the subject of study is people.

Confidentiality is an important ethical concern for most people considering a rehab programme or other drug intervention treatment. Each individual in recovery may have experiences they do not feel comfortable sharing with everyone. It is therefore important for not just doctors, but for other inter-professional members, to respect the confidentiality of each person they are treating. Allowing the individual to come to terms with their experience is part of the rehabilitation process, and it is not something to be hastened or taken for granted. Permitting the individual, who might be feeling emotional, the opportunity to heal their wounds from drug and alcohol abuse is vital for recovery. This is why it is imperative that a client enquires what the confidentiality policies are before registering on a treatment programme.

Ethical standards of care have been established by numerous national groups and organizations to help support and identify quality care within the industry. For example, the National Association of Social Workers has a specialization program just for professionals who deal with Alcohol, Tobacco and Other Drug (ATOD) problems. The American Society of Addiction Medicine (ASAM) is another group that supports increasing the quality of addiction treatment by establishing "addiction medicine as a specialty recognized by professional organizations, governments, physicians, purchasers and consumers of health care services, and the general public." Becoming aware of the ethics of addiction treatment can give you the insight necessary to ask informed questions about treatment before embarking on the road to recovery.

Conclusion

Appendix A

August 2010

To whom this may Concern,

My name is Shauna Grant and I am a researcher from the University of Northampton. I obtained your details from the organisation where you attend daily drop-in sessions, and I am therefore contacting you. I am requesting your consent to take part in my research, as I understand that you fit the criteria for my area of study.

As part of my research I am undertaking an examination of whether the interventions provided by health and social care services are of benefit to you, and whether they help you steer away from your addiction. The objective of my study is to better understand what it is like for you to deal with the addiction once it has gone so far.

In order to undertake this research, I would be really grateful if you could give consent for me to carry out my research in the form of a short interview, lasting up to 45 minutes, with just myself as the researcher, in your own setting. Notes will be taken at the interview and everything said will remain confidential between us.

I look forward to your reply and for us to discuss the matter at hand further.

Yours sincerely

Shauna Grant

Appendix B

Interview schedule

How did you recognise you needed help to stop taking drugs or drinking alcohol excessively?

What support did you get from the inter-professional workers?

What challenges did you face in your decision to stop taking drugs or alcohol?

What benefits do you think you have gained from the interventions being introduced to you?

What has been your experience from using the interventions services?

Do you think there are enough services around to help you, if and when you do decide to refrain from drugs or alcohol?
