Saturday, December 28, 2019

The Homeless Population And The Health Care Act Essay

The Homeless Population and Barriers to Health Care

There are currently 564,708 homeless individuals in the United States (U.S.); however, this is just an estimate, as hundreds more probably go uncounted during the point-in-time (PIT) count or remain unregistered with non-profit agencies providing services (The National Alliance to End Homelessness, 2016). Before the Affordable Care Act (ACA), most homeless individuals did not have health insurance, since they, like the rest of the low-income population, could be accepted into the Medi-Cal/Medicaid program only if they had eligible children. Since the ACA was implemented, a large percentage of the homeless are insured, but this does not mean that the preexisting gaps and barriers to accessing health care no longer exist. They do. Being homeless has been found to correlate with poor health status (Robert Wood Johnson Foundation, 2016). In fact, homeless individuals are at greater risk for chronic illness, and experience more of it, than someone who has housing. Additionally, once chronic illness develops in a homeless individual, they are at higher risk for comorbid conditions, new conditions (such as skin disorders and respiratory illness), and accelerated progression of their disease(s). Current barriers include a mistrust of the medical community, not having a primary physician, not following through on regular medical appointments, and lack of a support system, all of these having the potential to cause...

A Vulnerable Population: The Homeless Veterans
Patricia Dilbert, NUR/440, April 7, 2014, Deanna Radford, MSN, RN, CNE

In this presentation, we will explore a vulnerable population, with a focus on homeless veterans. According to the McKinney Act (1987), a homeless person is one who lacks a fixed, regular, and adequate nighttime residence.
One who has a primary nighttime residence that is a supervised, publicly or privately operated shelter designed to provide temporary living accommodations also qualifies.

The Emergency Department Is the Unofficial Primary Care Facility for the Homeless

Disease, access to primary care, domestic violence, abuse, and intergenerational poverty are all factors contributing to the 1.5 million people who experience homelessness each year (Doran et al., 2013; Zlotnick, Zerger, & Wolfe, 2013). Compared to 12.3% of the general population, 44% of homeless people rate their health status as fair/poor (Seiler & Moss, 2012). This statistic falls in line with research done by Zlotnick et al. (2013), which explains that the homeless population has higher rates of hypertension.

A Brief Note on Preventative Medicine and Education

Poor health and homelessness have been connected through multiple studies. Having poor health can cause homelessness; on the flip side, being homeless can also cause poor health. Being homeless brings a list of complications, including limited access to proper health care. This causes the health of the homeless population in the United States to be worse than that of the general population. Common health problems in the homeless population include mental health problems, substance abuse...

Healthcare and the Homeless Geriatric Population

...America's homeless. For a long time, the homeless were among the very few who did not qualify for quality healthcare services. The new Obama Administration has established funding and programs that allot Americans free health care; however, not all are aware of how to receive these services. Most homeless individuals are not even aware that they have been made available.
This research proposal will discuss 1) the issues concerning the homeless population and their health, and 2) the reasons...

Homelessness Intervention Paper: Homelessness

"...pay for housing, food, childcare, health care, and education" (nationalcoalitionforthehomeless.org). Housing accounts for a major percentage of income and often must be eliminated. "Two issues that contribute to increasing poverty are: eroding employment opportunities for large segments of the workforce and the declining availability of public assistance" (nationalcoalitionforthehomeless.org). The United States' official definition of homelessness is: a homeless individual is defined as "an individual...

Homelessness and Poverty Are Inextricably Linked

Identify the problem: "Homelessness and poverty are inextricably linked. Poor people are frequently unable to pay for housing, food, childcare, health care, and education. Difficult choices must be made when limited resources cover only some of these necessities. Often it is housing, which absorbs a high proportion of income, that must be dropped. If you are poor, you are essentially an illness, an accident, or a paycheck away from living on the streets. Two factors help account for increasing poverty:..."

Homeless Veterans of Fayetteville, Arkansas: Needs Assessment

The population of focus for this needs assessment is homeless veterans in Northwest Arkansas; we explored many factors that cause homelessness within this population.

Target Population

More than one-third of homeless adults interviewed for the Northwest Arkansas PIT census were veterans of the United States armed forces (Collier, Fitzpatrick, & O'Connor, 2015).
Of the veterans interviewed, 92.5% were male and 79.3% were...

Legal Factors of an Urban Institute

...eligible populations, the Congressional Budget Office has projected that only eight million will enroll in the first year (2014) and only 11 million two years after implementation (Congressional Budget Office, 2013).

Issue Statement

How can state legislatures improve access to care for the homeless population?

Stakeholders

Due to the magnitude of this issue, several interest groups have formed for the expansion of Medicaid. Stakeholders include advocacy groups such as the Homeless Health Council...

The Development of the Affordable Care Act

...develop policies to impact the provision of health care was examined through research on the Affordable Care Act, established in 2010, and through the development of MACRA legislation. Together, both political changes are working to improve health care and patient outcomes. Both will work together to ensure Americans receive quality health care and to assist in decreasing health care spending. The Medicare Access and CHIP Reauthorization Act (MACRA) was developed by the Centers for...

Homelessness: An Epidemic Across the United States

...programs that assist the homeless and homeless-prevention programs is abysmal, while the costs incurred due to such a large homeless population continue to rise. Over the past century, a variety of acts and programs have been put in place that have dramatically affected the homeless population of the time, both positively and negatively. This problem can be directly linked to the outcomes of these acts and programs. In order to attack the root cause, the American population needs to look back at the

Friday, December 20, 2019

Essay on Ideal Women vs Real Women in Beowulf and The Wife...

In the literature of the Middle Ages, women are often presented as unimportant characters, which can also reflect how the author wants the female characters to be represented. Women are usually shunned and have no say or control in what they do, owing to what men desire, as Ophelia and Gertrude do in William Shakespeare's Hamlet. But the female characters I will discuss are women with power, control, and a voice. The majority of these female characters' appearances are made to represent wickedness, evil, or a seductress who challenges a man's beliefs, and they do not symbolize "perfect" women. In the epic poem Beowulf, the majority of the characters are male, with the exception of a few females in the poem. When going back to the...

We can also question whether the author's or storyteller's intention was to actually give the females in Beowulf a real sense of what it is to be a woman, or whether the author is presenting them as something of nature, goddess-like, nurturing, and a non-believer in religion. One can also argue that Beowulf represents Christianity while Grendel's mother represents nature, as in the description of where she lives:

And suddenly discovered the dismal wood,
Mountain trees growing out at an angle
Above gray stones: the bloodshot water
Surged underneath. (1414-1417)

Another female character in the poem is Wealhtheow, and without a doubt we notice that she is a female who is respected and admired, being the wife of King Hrothgar and Queen of the Danes: "Applause filled the hall. / Then Wealhtheow pronounced in the presence of the company" (1214-1215). There is a great contrast between Grendel's mother and Wealhtheow. Geoffrey Chaucer's The Wife of Bath's Prologue and Tale is one of the many Canterbury Tales that can bring us awareness of women's roles in the Middle Ages. Even though Alisoun, the Wife of Bath, is a female traveling with a group of men, she still manages to hold her own ground. She tells the men that in order to have a great...

Thursday, December 12, 2019

Knowledge Management Social Media Definition

Question: How can social media such as LinkedIn, Facebook, and Twitter be used to improve knowledge sharing, build social capital, and support innovation?

Answer:

Introduction

Today the whole world is connected by social media, and it has slowly become an integral part of our lives. Social media is considered the best medium for entertainment, but its use is not restricted to that (Leonardi, Huysman & Steinfield, 2013); its possibilities are endless. Everyone from the common person to the business owner uses social media to promote a brand, because with its help a brand can be promoted internationally. This promotion process is also very cost-effective: with minimum cost, companies can share their message with the whole world (Tess, 2013). Social media is also used as a medium for knowledge sharing, through which companies can educate their employees and customers. It likewise helps establish social capital, which can benefit companies immensely. With social media, companies can support innovation by funding R&D projects or new startups with which they can later make agreements. It also helps in problem solving: if any customer is facing a problem, that person can connect directly with the company's official page and get instant help. The main drawback of using social media is that it is prone to hacks, where an unauthorized person can take control of the official page and prevent the company from helping others. The following report is about how companies interact with the help of social media and how they use it for knowledge sharing, to increase social capital, to support innovation, and to help solve problems. The report also discusses the drawbacks of using social media.

1. Social Media

Social media is a medium through which people of different countries and backgrounds connect with each other.
It is a collection of online channels for communication dedicated to communities of people, where interaction, content sharing, and collaboration occur. Social media platforms include Facebook, Twitter, and LinkedIn. Social media is gradually becoming an integral part of human life, as people find it a refreshing and stress-free space. Many companies have also benefited from social media in promoting their brands. With the help of social media analytics, better business decisions can be made by collecting data from blogs and social media websites. Social media is generally used by companies to see what people think about them and how good their services are (Gibbs et al., 2015). Social media also gives companies exposure so that they can broaden their customer reach. It lets a company create a page to which people are invited and which they can like. This is also a kind of business strategy: companies conduct surveys and communicate with the customers who liked their page. Social media likewise gives customers a platform to interact with their favorite brands and learn much more about them. It helps a company get feedback about its products, so that the company can change them accordingly and learn how customers feel when using them. One problem with social media is that it is prone to cyber attacks, in which an official page can be hacked and offensive messages posted, resulting in an unsatisfactory customer experience.

About the chosen company

7-Eleven is a retail chain with many branches offshore. Its franchised and licensed stores number over 56,000 across more than 18 countries (Ngai, Tao & Moon, 2015). The chain of stores run by 7-Eleven was earlier known as Tote'm Stores until it was renamed 7-Eleven.
The reason behind the company's name is that the stores originally remained open from 7:00 a.m. to 11:00 p.m., seven days a week. The reason for the stores' popularity is that they remain open even at times when other stores are closed, so customers can get whatever they want at any time (Klausen, 2015).

Social Media in Knowledge Sharing

There is a misconception that social media is only a medium of entertainment, but its uses are countless (Trainor et al., 2014). It is not only meant for entertainment; it is used for knowledge sharing too. With the help of social media, information can be shared and circulated around the whole world within a few hours. Through social media, 7-Eleven can educate its customers about its products, and information exchange also takes place. Social media helps transfer knowledge to every part of the world without anyone even moving from their place (Ellison, Gibbs & Weber, 2015). Knowledge sharing can help an organization attract more customers, which gradually benefits the company (Aral, Dellarocas & Godes, 2013). Through knowledge sharing, companies can tell customers about new products and give glimpses of upcoming products in their manufacturing line. In this way, employees can also increase their expertise. Social media is the lowest-cost medium for transferring any form of message to a large group of people, and it is also the fastest. By taking organizational factors into consideration, organizations can improve the conditions for knowledge sharing (Saffer, Sommerfeldt & Taylor, 2013). If any changes need to be made to a piece of information, they can be made easily, and the information can still reach other people in no time.

2. Social Media in Building Social Capital

Social capital is the term for the measured value of the social relationships and networks that give economic growth to an organization.
Social media helps a company build social capital: because many other organizations are also connected through social media, they can exchange information and ideas and discuss the future of their companies, which they can then act on to attract more customers (Tuten & Solomon, 2014). With the help of social media, companies can also learn what rival companies are up to and take cues from them to improve their business strategies. Social capital is vital, as it ensures trust among co-workers, their satisfaction level, and the quality of communication that occurs among peers, seniors, and subordinates (Majchrzak et al., 2013). Social capital also supports the efficient working of employees. The main aim of social capital is to meet organizational goals in the best way possible. Twitter is a good example of creating social capital, as it includes all kinds of groups: family, friends, work, teams, and many more. Building trust among groups is very important, as it helps in coordinating different tasks. Social capital basically works with heterogeneous groups, that is, people of different backgrounds, so that it can build a community that makes the work easier to do. Social capital also helps in creating a large group of people from different fields who can help each other when needed. Social media is a great medium for building social capital (Nah & Saxton, 2013).

Role of Social Media in Supporting Innovation

Social media can prove an excellent way to support and transfer information to every part of the organization. It is an incredible way to reach out to customers and take innovative ideas from them. Social media sites such as Facebook, LinkedIn, and Twitter are great means of being in touch with customers, and they also help the employees of the company stay in contact with each other (Anduiza, Cristancho & Sabucedo, 2014).
There are two ways to involve the company's network in the development of new products and services: crowdsourcing and open innovation. In crowdsourcing, a problem is shared with a particular community or group, and the members of the group are asked to resolve it. In open innovation, members of the company are asked to resolve a query and post solutions individually. Social media offers various advantages for innovation. The creativity and wisdom of people outside the workplace can be tapped: several queries can be posted online and various people can solve them (Kent, 2013). The most innovative response can then be chosen, adding creativity to the company's products and services. Without this, the company has to depend on previous ideas and is deprived of new and innovative ideas from innovative minds. Innovations can also be posted online, which attracts more customers to the company. Feedback can be gathered online from different customers, so the company learns about the changes that have to be made in the firm for better growth. The company can also post ideas for a product and ask customers to choose the best among them, thus increasing transparency.

Role of Social Media in Problem Solving

Social media brings people together, and hence it can prove a great tool for problem solving. If any problem occurs in the organization, it can be resolved using social media: the problem can be posted on social media sites, and employees or customers can be asked to solve it. This way, the organization can get several ways to resolve the query. New and innovative minds will provide solutions and thus deliver effective results. Various types of problems arise in organizations on a large scale.
If any employee faces a problem in the organization regarding anything, he or she can post the problem on a social media site and get broad support from other employees; the higher authorities can also learn about the problem easily, and thus it can be solved within a very short period of time. The higher authorities can likewise convey messages through social media sites, so a message is delivered in a very short span of time and without any delay.

Risks Associated with the Use of Social Media

There are various risks associated with the use of social media in an organization. Use of social media can result in a lack of long-term motivation from participants. In the traditional procedure, making or approving any change in the organization requires a lot of paperwork; if social media is used instead, every change is made quickly, and thus there is no proof of the change, which can be harmful for the organization in the future (Huang, Baptista & Galliers, 2013). The organization may face difficulty in managing large-scale projects, with harmful effects on the company. Sometimes the authority tries to create a social presence but the employees are not interested in it, wasting the authority's time. The biggest threat is that sometimes people pretend to be other people and post on their behalf; the information provided by such an attacker can be fatal for the company's environment. At times, it may also happen that an individual has something important to say and needs attention, but is ignored.

Conclusion

Hence, from the above discussion it is concluded that social media is considered the best medium for entertainment, but its use is not restricted to that; its possibilities are endless.
Everyone from the common person to the business owner uses social media to promote a brand, because with its help a brand can be promoted internationally. Social media can help a firm promote its business in different ways. It is also a kind of business strategy in which companies conduct surveys and communicate with the customers who liked their page. It helps in knowledge sharing, in building social capital, in supporting innovation in the firm, and in solving many problems. There are various risks associated with social media as well.

References

Leonardi, P. M., Huysman, M., & Steinfield, C. (2013). Enterprise social media: Definition, history, and prospects for the study of social technologies in organizations. Journal of Computer-Mediated Communication, 19(1), 1-19.
Tess, P. A. (2013). The role of social media in higher education classes (real and virtual): A literature review. Computers in Human Behavior, 29(5), A60-A68.
Gibbs, J. L., Eisenberg, J., Rozaidi, N. A., & Gryaznova, A. (2015). The "megapozitiv" role of enterprise social media in enabling cross-boundary communication in a distributed Russian organization. American Behavioral Scientist, 59(1), 75-102.
Ngai, E. W., Tao, S. S., & Moon, K. K. (2015). Social media research: Theories, constructs, and conceptual frameworks. International Journal of Information Management, 35(1), 33-44.
Klausen, J. (2015). Tweeting the Jihad: Social media networks of Western foreign fighters in Syria and Iraq. Studies in Conflict & Terrorism, 38(1), 1-22.
Trainor, K. J., Andzulis, J. M., Rapp, A., & Agnihotri, R. (2014). Social media technology usage and customer relationship performance: A capabilities-based examination of social CRM. Journal of Business Research, 67(6), 1201-1208.
Aral, S., Dellarocas, C., & Godes, D. (2013). Introduction to the special issue on social media and business transformation: A framework for research. Information Systems Research, 24(1), 3-13.
Ellison, N. B., Gibbs, J. L., & Weber, M. S. (2015).
The use of enterprise social network sites for knowledge sharing in distributed organizations: The role of organizational affordances. American Behavioral Scientist, 59(1), 103-123.
Saffer, A. J., Sommerfeldt, E. J., & Taylor, M. (2013). The effects of organizational Twitter interactivity on organization-public relationships. Public Relations Review, 39(3), 213-215.
Tuten, T. L., & Solomon, M. R. (2014). Social media marketing. Sage.
Majchrzak, A., Faraj, S., Kane, G. C., & Azad, B. (2013). The contradictory influence of social media affordances on online communal knowledge sharing. Journal of Computer-Mediated Communication, 19(1), 38-55.
Nah, S., & Saxton, G. D. (2013). Modeling the adoption and use of social media by nonprofit organizations. New Media & Society, 15(2), 294-313.
Mergel, I., & Bretschneider, S. I. (2013). A three-stage adoption process for social media use in government. Public Administration Review, 73(3), 390-400.
Anduiza, E., Cristancho, C., & Sabucedo, J. M. (2014). Mobilization through online social networks: The political protest of the indignados in Spain. Information, Communication & Society, 17(6), 750-764.
Kent, M. L. (2013). Using social media dialogically: Public relations' role in reviving democracy. Public Relations Review, 39(4), 337-345.
Huang, J., Baptista, J., & Galliers, R. D. (2013). Reconceptualizing rhetorical practices in organizations: The impact of social media on internal communications. Information & Management, 50(2), 112-124.

Wednesday, December 4, 2019

The Hidden Cost of Convenience Essay Sample

Human beings have had a relationship with the Earth through farming since 10,000 B.C., and to this day humans remain dependent on that very relationship. The development of farming techniques has exploded since the early 1900s with the advent of chemical pesticides and genetic engineering (GE). GE crops treated with pesticides produce larger quantities of food at an immensely faster rate. Approximately 70 percent of processed foods come from GE seeds and are treated with chemical pesticides. Because of the massive amounts of food being produced, the dollar expense for these abundant products is decreased. This may seem to be an amazing revelation in food technology, but what if the very processes which make food available, affordable, and convenient are what is making modern Americans sick? What are you really sacrificing for convenience?

Pesticides are substances used to destroy insects or any other organisms (pests) that are harmful to cultivated plants or animals. There are many variations of natural pesticides and chemical pesticides. The use of chemical pesticides raises a controversy about the safety of our food and our environment. A number of studies conducted by the World Health Organization and the United Nations Environment Programme suggest that the use of pesticides is extremely dangerous. These studies conclude that about three million agricultural workers suffer illnesses such as cancer from severe poisoning due to pesticide exposure. Of this number, 18,000 die every year (Drake). However, Mr. Rick Melnicoe, Director of the Western Integrated Pest Management Center and the UC Statewide Pesticide Coordinator, claims "it is the dose that makes the poison and that there is virtually no illness associated with modern pesticide residue on foods. Illnesses that do occur are caused by misuse,
exposure to concentrated levels by workers, and basic stupidity" (Safe Pesticides?). Overall, the more important point is not why pesticides cause illness but that they do, and everyone should be aware of what is on or in their food before they feed themselves or their loved ones. We can only be as healthy as the products we put into our bodies and our environment allow us to be.

Pesticides have been used by various farmers since 2,500 B.C., but those pesticides consisted of natural substances such as honey, salt, and sulfur. In today's world, the most abundant types of pesticides are those consisting of man-made chemicals. After World War II, the agricultural effects of the chemical dichloro-diphenyl-trichloroethane (DDT) were discovered by Dr. Paul Muller, and it was commercially produced worldwide. DDT seemed to be the perfect pesticide: it was easy to use, it appeared to have low toxicity to mammals, and it decreased insect-borne diseases like yellow fever, malaria, and typhus. But a few years later, insects and other pests were found to develop a resistance to DDT, so it was deemed no longer as effective. It was later discovered that DDT was very toxic to the environment, caused cancer, and caused genetic damage in animals. Rachel Carson, an American marine biologist, contended in her book Silent Spring that DDT alone had irreversibly harmed animals and contaminated the world's entire food supply. DDT is now much less abundant and is used chiefly to combat malaria. Many pesticides with toxic effects similar to DDT's are still in use to this day, and because of the ever-present chemical resistance in pests, more new pesticides are being developed. We are now in constant need of chemical development to combat nature's adaptation. To disrupt the balance of nature is to put many lives in harm's way.
There is no debate that chemical pesticides harm the environment. The most dangerous types of pesticides are insecticides, fungicides, and herbicides. These substances have contaminated water, poisoned and mutated wildlife, and disrupted natural growth patterns in areas of contaminated soil. According to the Environmental Protection Agency, in 2001, 4.9 billion pounds of pesticide products made up of 600 separate chemical compounds were used in the United States alone (Lah). Environmental health is directly linked to the amount of harmful toxins the environment is exposed to, so it would be in the best interest of ourselves and of wildlife to reduce the overall use of these dangerous chemical pesticides as much as possible.

Genetically modified foods became present in the food market in the 1960s, but were not reintroduced commercially until the 1990s. A number of scientists have discovered and developed this technology since the 1960s and have made GE present in everything from plant seeds to livestock (Arimi). Some GE foods have been shown in the past to be carcinogenic. One of the largest chemical corporations in the world, Monsanto, has created a genetically modified bovine growth hormone that is linked to a 2.5 times greater incidence of colon, breast, and prostate cancer in humans. In the United States, 135 million acres owned by large companies like Monsanto are dedicated to growing GE crops and raising livestock (Genetically Modified). Since the majority of products in our supermarkets are processed food, and the majority of processed food is unhealthy and comprised of these genetically modified organisms, one would reasonably conclude that these products do more harm than good in our society. Genetically engineered products are the most abundant but least recognizable artificial substances present in our everyday lives.
Genetically modified organisms (GMOs) are organisms that have an altered genetic makeup, achieved either by gene splicing or by hormone injections, to enhance a certain trait. For example, these artificial genes can cause plants to release a certain chemical to kill bugs and other natural predators when attacked, or cause unnatural growth in the breasts of chickens. Up to 75,000 dairy cows are treated with the bovine growth hormone, and there are over 80 million acres of genetically engineered cultivated crops (Food, Inc.). Seventy percent of the food in United States grocery stores is made from GE products. This food mostly consists of unhealthy, obesity-causing products like soda and french fries, but GE ingredients can also be hidden in any type of food. GE food does not significantly smell, taste, or look different, and it is not specifically labeled. Therefore, the consumer has an insufficient amount of data to determine the status of what they are consuming. There has recently been a controversy regarding labeling food produced through GE, but no law has yet come to pass that requires these labels. GM organisms can also harm the environment. In 2006 it was thought that GM crops were to blame for the disappearance of 90 percent of our honey bees, which caused a chain reaction of ecological misfortunes. Honey bees are responsible for one third of America's food supply, and many of our crops were lost due to lack of pollination. This in turn caused produce to be less abundant and its price to skyrocket (Disappearance of the Bees). This is only one example of many disasters caused by man-made toxins, and the negative effects are becoming harder to ignore. The longer we wait to change our harmful methods, the more severe the consequences can become.
It seems that producing larger quantities of food at a faster rate would only be beneficial if we did not use genetic engineering and harsh chemicals. Making the environment and the food we eat toxic is the price we pay for the convenience of our grocery stores, and disrupting the balance of nature can be more damaging than many foresee. It is no secret that present-day Americans are not the ideal picture of health. Recent studies have shown that 33 percent of Americans are obese (James) and 26 percent have cancer (Alt. Cancer), while in the late 1800s it was estimated that only 10 percent of Americans were obese and only 9 percent had cancer. Both obesity and cancer can be linked to the use of pesticides and GE foods; in fact, there have been direct links between these conditions and particular pesticides and GE products. Many contend that the sole reason these diseases have dramatically increased since the late 19th century is the increased use of pesticides and GE products. With the number of people who have obesity, cancer, and diabetes at an all-time high, many are searching for reform in the way we care for our bodies. A number of health experts claim that if we prioritize our well-being and change our diets from highly processed food to organic food, we can solve this dilemma. It is also claimed that by simply changing our diet we can help our environment as well as our health. The word "organic" is seen frequently. It is advertised in grocery stores, and sometimes in films or on television, but what does it mean to be organic? Is taking a step toward an organic revolution the right step in making our country a healthier place?
The term â€Å"organic farming† was coined by an English agriculturalist named Lord Northbourne in 1940 in his agricultural book Look to the Land. Northbourne used this term to depict an all natural manner of farming opposed to one that integrated chemicals and other unreal substances. Most of his cognition about holistic agriculture was derived from analyzing old civilizations and carry oning his ain personal surveies. With the rise of the petro-chemical industries in the early 1900’s came the beginning of industrial pesticides. Up until so there was no term depicting the organic agriculture manner because no other farming manner existed. It seems that this â€Å"new revolution† isn’t so new. After all. we are fundamentally seeking to bring forth nutrient as our ascendants did before the 20th century ( Northbourne ) . Harmonizing to the United States Department of Agriculture ( USDA ) . for green goods to be certifiably organic it must follow these demands: free of sewerage sludge. man-made fertilisers. prohibited pesticides. and genetically modified beings. For farm animal to be organic they have to run into a certain wellness and public assistance criterion. be untreated by antibiotics or growing endocrines. be fed one hundred per centum organic provender and be provided entree to out-of-doorss. Food with the USDA Organic label is one 100 per centum organic and has been investigated by the USDA and warrants that the organic makings are met. Other nutrient merchandises labeled organic with out the USDA cast may or may non hold followed the antecedently listed demands. For illustration. husbandmans may label their nutrient organic that is non genuinely organic to derive concern or to sell their merchandises at a higher monetary value. Farmers may besides hold true organic merchandises but can non afford to pay the monetary value of acquiring investigated and certified by the USDA. 
Any merchandise that does non incorporate an official organic seal requires farther research if one is to be confident in the true quality ( Organic Standards ) . By exchanging to organic methods of farming we can assist our environment and even change by reversal the negative effects of chemical abuse. The loss of dirt birthrate and H2O taint is common amongst harvests that are treated with chemical fertilisers. but by utilizing organic techniques it eliminates these jobs. Soil birthrate and biodiversity are incorporated in organic agriculture methods. every bit good as healthy micro-organism preservation and energy efficiency. Besides. the sum of harmful chemicals being dumped in the environment is juristically decreased when big graduated table harvests are switched to organic because organic harvests don’t use these chemicals. It was estimated that 1000000s of gallons of pesticides were non released into the environment due to one purchase of an organic merchandise from the Wal-Mart Corporation. By extinguishing these unsafe chemicals we eliminate terrible unwellnesss caused by them. There are ever possibilities of nutrient borne il lnesses irrespective the method of bring forthing nutrient. but the unwellnesss present in organic nutrient are far less terrible than 1s obtained by non-organic nutrient ( â€Å"Do I Help†¦Ã¢â‚¬  ) . In footings of practicality processed nutrient is typically less expensive than organic nutrient. Processed nutrient is much cheaper due to the sheer mass of it being produced by big corporations. The affordability of processed nutrient makes it easier for low-income households to last. Though invariably devouring this type of nutrient can take to medical jobs. many people don’t have the pick to purchase healthier organic nutrient. Organic nutrient costs from 30 to ninety per centum more than processed nutrient. Surveies have shown that as more people support organic merchandises the cheaper they will acquire. 
but for now it is processed food that keeps low-income people fed, yet keeps them starved of proper nutrients. So how is the government helping to solve the dilemma of the United States' environmental and personal health crisis? There have actually been many improvements in food and environmental safety in the last thirty years that have had an extremely beneficial effect on our health. For example, an environmental law passed in 1990 in the state of Washington put a limit on the amount of agricultural chemicals that could be used; on average, a farmer had to reduce pesticide use by twenty percent (State Law). This stopped tons of excess toxins from entering the environment. On the personal health front, a law passed in July 2003 required fast food restaurants like McDonald's and Taco Bell to properly label the contents of their food so customers can be more aware of what they are consuming (Mello). Though these are great improvements, they are relatively small, and there is still much more to be done. Developing laws that benefit public safety becomes difficult when those laws affect large corporations with political ties. For example, in 2001 two-year-old Kevin Kowalcyk died after developing hemolytic-uremic syndrome from eating a hamburger contaminated with E. coli. After this incident a law was introduced that would require the USDA to develop performance standards to reduce the presence of pathogens in meat. A simple solution was created to prevent a similar tragedy, yet the meat company was not found liable, and the law never passed (Food Inc.). Many speculate that this was due to financial ties between the meat company and the USDA, but no one could say for certain. Unfortunately there have been other incidents like this, where dangers are apparent
but simple solutions fail to be put into action. As long as corporations have these political ties, they have power over our health and safety. It is wise to investigate the food we consume in order to ensure our safety and well-being. We can't always depend on our government to protect us from harmful food products, because the very corporations producing these foods are involved in, and have power within, many political matters. The best way to remain safe is to make your health a priority and to research the food you are consuming. The food we buy affects more than what is on our dinner table. Purchasing products at the grocery store is like voting, and by purchasing safer products we can support the health of our families as well as the environment. Regardless of whether you are pro-practicality or pro-organic, it would be beneficial to investigate the food we consume in order to ensure our safety and well-being.

Works Cited

"Alternative Cancer Treatment | Cancer Cure | Cancer Remedies." Do GMO Foods Cause Cancer? The Cancer Industry, 30 Mar. 2012. Web. 26 Apr. 2012.
Arimi, Joshua. "Genetic Engineering of Food: The History, Science, Economics and Controversy." Genetic Engineering in Food - The History, Science and Economics. Arimi Foods, 23 Sept. 2011. Web. 26 Apr. 2012.
Barling, Mannie, and Ashley F. Brooks. "Statistics on Food-borne Related Illnesses and Death Caused by Salmonella, E. Coli, Listeria, Toxoplasma, Campylobacter Bacteria, Calicivirus, or Norwalk-like Virus Emanating from Factory Farms (CAFOs), Genetically Modified Foods from Monsanto and the Poor Processing of Food in America." Statistics on Food-borne Illnesses. RN, 21 May 2011. Web. 26 Apr. 2012.
Carson, Rachel, and Lois Darling. Silent Spring. Boston: Houghton Mifflin, 1962. Print.
"Do I Help Conserve the Environment by Eating Organic Food?" Natural & Organic Health: Food Benefits, Directory, Nutritional Value.
Rural Tech Services, Jan. 2012. Web. 26 Apr. 2012.
Drake, Susan S. "Green Labor Journal - Working for a Sustainable Future." Green Labor Journal. Union Plus, Feb. 2012. Web. 23 Mar. 2012.
Food Inc. Dir. Rob Kenner. Magnolia Pictures, Participant Media, River Road Entertainment, 2008. Film.
"Genetically Modified Foods (Biotech Foods) Pros and Cons." WebMD. WebMD, 2011. Web. 20 Mar. 2012.
James, Brian. "Obese Chart." The Consumer, 18 Apr. 2012. Web. 26 Apr. 2012.
Lah, Katrina. "Pesticide Use Statistics." Pesticide Use Statistics, 26 Apr. 2011. Web. 27 Mar. 2012.
Mello, Michelle M., Eric B. Rimm, and David M. Studdert. "The McLawsuit." Health Affairs, Nov. 2003. Web. 15 May 2012.
McCabe, John, and David Wolfe. Sunfood Living: Resource Guide for Global Health. Berkeley, CA: North Atlantic, 2007. Print.
Northbourne, Walter J. Look to the Land. London: Dent, 1940. Print.
"Organic Standards." Ams.usda.gov. USDA: National Organic Program, 7 Feb. 2012. Web. 20 Mar. 2012.
"Safe Pesticides?" Environmental News, Articles & Information. EcoWorld, June 2004. Web. 27 Mar. 2012.
"State Law Limiting Pesticide Use." The Tribune [Deer Park, Washington] 29 Aug. 1990: 5-6. Google News. Web. 15 May 2012.
Vanishing of the Bees. Dir. George Langworthy and Maryam Henein. Hive Mentality Films, 2009. Film.

Thursday, November 28, 2019

Vision and Hearing

Proprioception is a third sensory modality that supplies feedback solely on the internal status of the body, the first two categories of sense being interoceptive and exteroceptive. Proprioception is the sense that tells us whether the body is moving with the required effort, as well as how body parts are positioned in relation to each other. The ability to estimate the weight of an object, and the force and timing with which our muscles must contract, are examples of our proprioceptive ability. Examples of proprioceptors are muscle spindles, also called stretch receptors, and their associated 1a axons. These receptors make up the part of the somatic sensory system that is focused on body sense, or proprioception. The muscle spindle consists of several types of specialized skeletal muscle fibers contained within a fibrous capsule. In the middle region of this capsule, group 1a axons wrap around the muscle fibers of the spindle. Group 1a axons are the fastest and largest of the group 1 axons, which are the thickest myelinated axons in the body. When a weight is placed on a muscle, the muscle lengthens and the muscle spindles are stretched. The stretching of the spindle causes depolarization of the 1a axon endings through mechanosensitive ion channels. The 1a axons enter the spinal cord through the dorsal root; from there they branch repeatedly and form synapses on both interneurons and alpha motor neurons of the ventral horns. The alpha motor neurons react by increasing their action potential frequency, which causes the muscle to contract. The muscle spindle also contains intrafusal fibers, which receive their motor innervation from a different type of lower motor neuron called the gamma motor neuron.
When the extrafusal muscle fibers contract and the muscle becomes shorter, the intrafusal fibers also become shorter, which means the 1a axons would fall silent and no longer provide information about muscle length. This is where the gamma motor neurons come in: they innervate the intrafusal muscle fibers at either end of the spindle, and their activation contracts the ends of the spindle, pulling on the noncontractile equatorial region and keeping the 1a axons active. Another source of proprioceptive input in skeletal muscle is the Golgi tendon organ, which monitors muscle tension, or force of contraction. The Golgi tendon organ is situated in series with the muscle fibers, at the junction of the muscle and tendon. A special feature of this receptor is that it is innervated by 1b sensory axons, which are slightly smaller than 1a axons. The different anatomical arrangements of the muscle spindle and the Golgi tendon organ are what distinguish the types of information they provide: the 1a axons from the muscle spindle signal muscle length, while the Golgi tendon organ signals muscle tension. The 1b axons enter the spinal cord, where they branch repeatedly and synapse on interneurons in the ventral horn. Some of these interneurons form inhibitory connections with the alpha motor neurons, in what is usually called the reverse myotatic reflex. Certain factors can influence our perception and sensation, including alcohol, drugs, and nerve damage; these can disrupt proprioceptive ability by decreasing the quality of the feedback.
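The spindle-to-alpha-motor-neuron loop described above is a negative feedback circuit, and its behavior can be sketched in a few lines of code. The following Python toy model is purely illustrative: the `gain`, `stiffness`, and `load` constants are invented for the sketch, not physiological measurements. Spindle output is modeled as proportional to stretch beyond resting length, and alpha motor neuron drive opposes the load in proportion to that output.

```python
# Toy model of the stretch (myotatic) reflex as a negative feedback loop.
# All names and constants here are illustrative assumptions, not physiology.

def simulate_stretch_reflex(load, steps=200, dt=0.01,
                            rest_length=1.0, gain=5.0, stiffness=1.0):
    """Return muscle length over time after a load stretches the muscle.

    Spindle (1a) firing is modeled as proportional to stretch beyond rest
    length; alpha motor neuron drive (reflex contraction) is proportional
    to that firing and opposes the load, as does passive stiffness.
    """
    length = rest_length
    history = []
    for _ in range(steps):
        stretch = length - rest_length          # what the spindle senses
        spindle_signal = max(stretch, 0.0)      # 1a axons fire when stretched
        contraction = gain * spindle_signal     # alpha motor neuron drive
        # Load lengthens the muscle; reflex contraction and passive
        # stiffness pull it back toward rest length.
        length += (load - contraction - stiffness * stretch) * dt
        history.append(length)
    return history

lengths = simulate_stretch_reflex(load=1.0)
# Feedback settles the length near rest_length + load/(gain + stiffness):
print(round(lengths[-1], 3))  # → 1.167
```

Setting `gain=0.0` (no spindle feedback) lets the same load stretch the muscle much further, which mirrors the point above: the reflex holds length near a set point, and the gamma system preserves that ability by keeping the 1a axons responsive during contraction.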

Sunday, November 24, 2019

John Gresham Machen

Introduction

John Gresham Machen was a legendary American theologian and fundamentalist leader. In the last days of his life his health deteriorated, and he struggled painfully against illness. In December 1936 his colleagues warned him against taking a trip to North Dakota because of the frigid temperatures. He was determined, however, to encourage the faithful members of the movement he had founded, which was under immense criticism. He had spent many sleepless nights worrying about the future of that movement after the defection of a great portion of its membership (Stonehouse 75). His desire to stay on course compelled him to risk the effects of foul weather on his ill health. He ignored every deterrent to his traveling that his colleagues posed, even though they were trying to protect him from the worst health conditions. He also demanded that they proceed to Carson and Leith, where he intended to meet with members of his association, even though his health had visibly begun to worsen. Surprisingly, he did not halt the journey to complain of his health; rather, he kept his companions awake with humorous stories along the way. Unfortunately, his cold rapidly became pleurisy, and the team was implored to halt the journey. They turned back seeking medical help and, despite his obvious agony, he insisted he would not die because of the work that lay ahead of him. His agony increased, compelling them to call a doctor, but he still had the strength to maintain a conversation with them (Stonehouse 78). He was diagnosed with pleurisy, but his condition continued to worsen, and he was taken for further medical attention to a Roman Catholic hospital, where he was admitted.
Doctors changed their diagnosis from pleurisy to pneumonia, and despite the struggle he was going through, his mind remained fixed on his mission. He sent telegraph messages to members of his association in Philadelphia, expending energy that some believed could have saved him had he committed it to the fight against his illness. Early in the morning on January 1, 1937, he experienced alternating periods of lucidity and unconsciousness. During one interval of consciousness he wrote, with the help of a colleague, a telegram that proved to be his last word to the faithful. The telegram read: "I am so thankful for the active obedience of Christ. No hope without it" (Hart, "Doctor Fundamentalis" 64). Immediately after the telegram was written, his body became too weak to withstand the rigors it had been experiencing (Hart, "Doctor Fundamentalis" 65). He remained in that poor condition for the whole day, and at around 7:30 p.m. his soul departed for eternal rest.

Early Life

John Gresham Machen was born on July 28, 1881 to Arthur Machen and Mary Jones. He was the second of three sons, and his parents were living in Baltimore, Maryland at the time of his birth. His father was then aged forty-five and his mother thirty-four. Stonehouse writes, "Arthur was born in Virginia, trained in Harvard as a lawyer and his interests were deeply rooted in classical traditions of ancient south" (134). Arthur was good at literature and loved reading and learning new skills; for instance, he read the works of Thucydides, Caesar, and Horace, the Greek New Testament, and French and English literature. In addition, he had written detective and short stories, some of which won prizes and put him through Harvard Law School. Astonishingly, he learned Italian in his eighties, claiming to do it for the sake of fun.
Gresham's mother, Mary Jones, was born in Georgia. She was twenty-one years younger than Arthur Machen at the time the two married. She was schooled at Wesleyan College, where she gained experience as an author, publishing The Bible in Browning in 1903. Moreover, while her husband was "an Episcopalian, she opted to be a Presbyterian, and she taught her son shorter version and Westminster Catechism at his tender age" (Hart, "Doctor Fundamentalis" 66). Gresham appreciated his close relatives but spent most of his early days with his mother, which explains the source of his passionate religious influence, to the extent of his later founding religious movements (Hart, "Defending the Faith" 67). The Machen kin exhibited a strong association with southern classicism as well as Victorianism. His parents were strongly cultured, affluent, and pious Christians. They attended Franklin Street Presbyterian Church in their hometown of Baltimore. The church formed part of the congregation of the southern Presbyterian Church, aligned with the rather conservative Old School Presbyterianism. His mother played a prominent role in Gresham's acquaintance with Christian knowledge through religious training at home (Calhoun 87). Besides his catechism classes at Westminster, his mother obliged him to commit to memory all the teachings as well as the Kings of Israel. That formed a strong foundation for his biblical and theological knowledge. At the age of fourteen, Gresham decided to be a follower of Christ, and he started attending church services at Franklin Street Church. He also developed a love for reformed faith across denominations.

Education Life

As a young boy, Gresham was privileged to attend private school. Private schools were assumed to be for the rich, and his parents were financially stable. He was a bright student, and his good performance at the high school level enabled him to secure an opportunity at Johns Hopkins University in 1898. The university was in his neighborhood and was well known for scholarships. He sat an entrance examination whose results proved him deserving of the scholarship that was awarded to him (Hart, "Defending the Faith" 69). Basil Gildersleeve, his professor, a leading scholar in the United States and a member of Franklin Street Church, mentored him. He learned Latin, rhetoric, English literature, and Greek while at the university. Basil Gildersleeve always emphasized the need for interpretation and translation of texts into other languages, making Gresham's knowledge of other languages beneficial (Stonehouse 87). His minister, Harris E. Kirk, had suggested that he join the ministry because he was a devout Christian, but he refused. His refusal was initially seen to have been because of his excellent graduate studies and desire to pursue further studies. Notwithstanding his negative response, Gresham signed up at Princeton Theological Seminary and pursued his studies in an indiscriminate manner. Gresham did not undertake subjects that focused on homiletics and the Old Testament during his first year; he termed those subjects an invention of iniquity. He loved dealing with the New Testament and worked closely with B. B. Warfield, who, like him, believed that consistency is the easiest position to defend, thus becoming a conservative Christian. He took courses at Princeton University for a master's degree and involved himself in the social activities of the seminary. He used to dine at the Benham Club, where people knew him for his stunts. His fellow students and social colleagues knew him for his liveliness, good humor, and devotion to Princeton's football team, whose games, most of them on campus, he regularly attended.
Gresham later went to Marburg to pursue graduate studies after declining an offer to lecture at Princeton Seminary. He studied under Wilhelm Herrmann, whose theological liberalism bewildered him. He perceived so much liberalism toward Christianity in his professor's thinking that he grew defensive of the faith he was accustomed to. This made him appreciate Princeton Seminary and the professors who had taught him there. He was offered a one-year opportunity to give tutorials at Princeton Seminary and agreed to take it (Calhoun 87).

Life as an Instructor at Princeton

Hart writes, "In 1906, Gresham returned to Alexander Hall and continued to take his meals in Benham Club…students seemed not to like his elective course and complained to his mother over the issue" (Defending the Faith 85). With time he became the best teacher in the faculty, and as a result he dropped his dream of pursuing a PhD in Germany. He was strict in grading and teaching, with the goal of helping students acquire the right knowledge and skills (Hart, "Defending the Faith" 89). In 1909 some students submitted grievances to the board of directors, claiming that they needed a modernized curriculum. A strong rebellion attracted newspaper coverage when the administration refused to change the curriculum in the students' favor. Gresham sided with the administration, since he believed that conservative Christianity is the best defense against religious rebellion (Calhoun 87). His support for the administration proved his maturity to them, since he had recently gone through the same curriculum the students protested against. He later wrote three articles: "The Hymns of the First Chapter of Luke," "The Origin of the First Two Chapters of Luke," and "The Virgin Birth in the Second Century."
They were published in 1912 by the seminary in the Princeton Theological Review. Calhoun notes, "He also helped in developing articles 'Jesus and Paul' up to the volume that seminary published in the celebration of its centennial" (75). He later published several more articles, in most of which he employed critical arguments (Calhoun 76).

Life as a Minister and Professor

On November 3, 1913, at age thirty-two, Gresham was placed under the care of the southern Presbyterians in Baltimore and was licensed on April 22, 1914. He was not comfortable staying under the southern presbytery, and this led to his being ordained by the northern Presbyterian Church on June 23, 1914 in New Brunswick. The faculty of the seminary had appointed him assistant professor of New Testament a month prior to his ordination. Additionally, he went on to compile a book that accounted for the beginning of the Apostle Paul's creed, made public in 1921. In the book he responded to intellectuals who had held that the Apostle Paul altered the teachings of Jesus by alleging that his restoration was the beginning of faith. Hart claims, "The book was received well by conservative Presbyterians and had many reviews in the newspapers and magazines, across the country" (Defending the Faith 76).

Works Cited

Calhoun, David. Princeton Seminary: The Majestic Testimony, 1869-1929. New York: Banner of Truth, 1996. Print.
Hart, Darryl. Doctor Fundamentalis. Baltimore: The Johns Hopkins University, 1988. Print.
---. Defending the Faith: J. Gresham Machen and the Crisis of Conservative Protestantism in Modern America. New York: Barker Publishing, 1994. Print.
Stonehouse, Ned. Gresham Machen: A Biographical Memoir. Philadelphia: Westminster Theological Seminary, 1978. Print.

Thursday, November 21, 2019

Integrating Sustainable Design with Building Information Modeling for Energy Management in Saudi Arabia - Thesis

We see royal palaces and architecture as examples of art. Stone, sand, clay, and wood were the construction materials, and architects tried to build royal palaces that required less artificial lighting in the daytime; the architects made the palaces airy. With improvements in building materials and construction time, building a home became an easier job. Energy solved many problems in building and designing a home: artificial lighting and air conditioning create a home that is a luxurious shelter. But lighting and air conditioning require more energy, and generating more energy means burning more coal to supply the demanded power. The burning of coal produces tons of greenhouse gases that endanger many animal and plant species of the world. In the modern world, people think about the environment and the relation of the environment to our homes. To achieve environmental and economic sustainability, one has to construct a home that has the modern luxuries but has reduced carbon emissions. Green home designs are presented, and it is explained why they are necessary for the world in which we are all living. A green home has higher energy efficiency, and it utilizes natural and biodegradable materials. These materials have a positive impact on the environment and produce less waste. ... Sustainability is the capacity of a system to sustain itself. If a certain system has plenty of resources that are not exhausted by its use in various works, the system is said to be a sustainable system. Solar energy can be utilized in various works, but the energy from the sun never ends; in this way solar energy is a sustainable energy, and it is the sustainability of the system that lets it utilize solar energy. Wind is also a sustainable source of energy. The energy produced by solar or wind does not generate any greenhouse gases.
There is a need for sustainability, as we are facing the worst era today: air and water pollution have raised the temperature of the earth and caused depletion of the ozone layer. The depletion of the ozone layer permits ultraviolet rays to enter the earth's atmosphere, and the penetration of these ultraviolet rays causes a rise in the temperature of the world. The rise in temperature causes glaciers all around the world to melt at a faster speed, which has significantly raised the sea level in some places. To reverse or to stop all these reactions, there is a need to stop the emission of greenhouse gases. Sustainability is the key to having all the luxuries of life without the flow of greenhouse gases. We need sustainability to save ourselves and all the other creatures of the world, and we have to protect our forests and water resources. Homes are represented as our shelters and as major consumers of energy: energy is required to light and maintain a home. Natural gas, wood, and heating oil are used to heat homes in colder regions, but preference should be given to natural gas (if available) when there is a need to warm up the home, as natural gas produces less greenhouse gas.

Wednesday, November 20, 2019

Questions answered Essay Example | Topics and Well Written Essays - 1250 words

It is important to note that a proposition ought to have a defined timeframe upon which it will expire or be invalidated. Secondly, a contract must involve consideration (Meiners, Ringleb and Edwards, 2014). By definition, consideration denotes the damages on the part of the supplier or entity giving the promise, and it must be quantifiable financially. The third element, as posited by Meiners, Ringleb and Edwards (2014), is contractual capacity. There are legal guidelines outlining the qualifications of an individual with the capacity to enter into a contract with another person. For instance, an individual must be an adult, having attained eighteen years of age, and be mentally sound; any contract entered into with an individual who has not met the specified qualifications is invalid. The fourth element of a contract is legality (Meiners, Ringleb and Edwards, 2014). In this regard, the involved entities must be ready to bind their agreement legally. In case one of the parties does not deliver on the promises outlined in the contract, the legality of the contract gives the other party the basis to initiate legal proceedings. Fifthly, there must be valid consent to enter into an agreement (Meiners, Ringleb and Edwards, 2014). By explanation, this means that an individual should personally assent to the agreement without being forced. Informed consent is defined by Schermer (2002) as the practice or procedure through which a medical practitioner reveals all information relating to treatment to a patient, with the intention of providing him or her with all the relevant information required to make a choice to either allow or reject treatment. Patients, according to Schermer (2002), have a legal right to determine the type of treatment they prefer, and it is the obligation of the physician to respect the patient's decision.
In order for a patient to allow or refuse treatment, the physician involved must explain in detail the

Monday, November 18, 2019

Motivations of Managers in Small- and Large Firms Essay

In this paper, the importance of and major differences between small- and large-scale businesses, such as multinational corporations (MNCs) and transnational corporations (TNCs), will be thoroughly discussed. In the process, the differences in ownership, goals, and business organization, including the business activity of small- and large-scale businesses' external environments, will be compared and contrasted. Whether a company is small- or large-scale, the main purpose of establishing a business is to earn a large sum of profit. Earning profit is not solely dependent on increasing the company's annual sales; the manager's ability to maximize the use of existing resources also contributes a great deal to increasing the amount of profit a company can generate each year. Aside from the major differences between the corporate structures of small firms and large multinational companies, it is common knowledge that large-scale businesses are able to acquire bigger credit lines from banks than small-scale businesses, since small-scale businesses have limited financial resources for operating the business. For this reason, managers of small-scale companies are not fully able to capture the benefit of economies of scale. Large-scale businesses such as multinational corporations (MNCs) and transnational corporations (TNCs) have the financial capacity to invest in highly competitive human resources and to support employees' training and development, in addition to investing in newly innovated communication and production technology and enjoying the benefit of purchasing raw materials in bulk.

Friday, November 15, 2019

The Best Motion Picture: Jurassic Park

The Best Motion Picture: Jurassic Park The film that I think should be awarded The Best Motion Picture is Jurassic Park because of the excellent filming techniques, terminology and an effective storyline about dinosaurs that make it a captivating film. It keeps the viewers captivated and riveted to their seats. The action keeps the audience in suspense and has great effects. The dinosaurs looked real and sounded similar to what we would imagine real dinosaurs sounded like; this was a very unique effect in the film that made the animals look very realistic. The construction of the dinosaurs and the whole dinosaur park brought the film to life and made the viewers feel as if they were part of the investigation in the film, and also made the entire film seem modern even though dinosaurs don't exist in our generation. An extreme long shot was used to show the audience the island where Jurassic Park was situated, and this is where all the action takes place. At the beginning of the film we get an eye-level shot of the dinosaur when it was in the cage being transferred to an enclosure. This makes the audience feel insecure and think that it is looking at us. We also get a long shot when the characters walked towards the cars outside before they went to explore the park and see all the various dinosaurs. The extreme close-up shot of the mosquito in the resin attached to John's walking stick is very effective, as the backlight and zooming add a clear view of the mosquito and symbolize its importance. We also get a bird's-eye view when the people were dusting the ground around the dinosaurs' bones. It makes the audience feel like they are looking down at the people and the dinosaur fossil being dusted. Sound plays an extremely important role in the film. The sound in Jurassic Park has a huge impact on the viewer.
The synchronous sound is used brilliantly when the huge T-Rex engulfs one of the visitors in the park. The sound in the film made an enormous impact on the viewers, especially when the director uses non-synchronous sound as the actors enter the dinosaur park and also when the children are being chased. The sound gives the viewer a feeling of uncertainty and fear. There are no wild sounds in the film, but there are a few scenes with a voice-over, such as the scene where the people are trying to escape from the T-Rex as it chases them through the park. The voices of the victims are louder than the synchronous sound. The dolly shot was used when Ian was in the car with the lady and other tour guides. The camera is placed on a track and set to move at the same speed as the dinosaur. This makes the viewer very involved and close to the scene. A crane shot is used when the lady was trying to get away and reach the technical room. The camera was placed in the air and this made us as the viewers feel like she was being chased by a dinosaur. Zoom lenses were used when the goat was placed in the T-Rex's area, zooming in and out. This gives the audience the impression that the goat will be eaten and creates excitement in a scene where it would not otherwise exist. We get an aerial shot of the island taken via a helicopter, but it does not show us the whole island at once, and this conveys real drama and exhilaration. The colours used on the Jurassic Park logo attract the eye because of the use of bright colours, and it is a reminder to the audience of where the action takes place. The green forests within the dinosaur park show that the park is a luscious ground for herbivores and a typical environment in which dinosaurs would live. The use of colours sets the tone, and Jurassic Park has many different colours, which I think makes the film more realistic, exciting and adventurous. The dull colours in the background made the logo stand out.
The costumes of the characters were not too stylish, but they stood out from the bright evergreen island and made them look neutral. The helmets and multi-coloured cars made their clothing stand out more. The key light let us see the characters and dinosaurs no matter what the conditions were in the film, and the fill light added some shadow. This is shown when the T-Rex escapes and there is a spotlight shone on it. They also turned down the fill light when the two velociraptors entered the kitchen, increasing the amount of shadow; this helped create suspense and tension for the audience. There aren't many icons, indexes and symbols in the film, but one symbol that signifies fear is the dinosaur footprint, because dinosaurs are dangerous animals and threats to humans. I think that Jurassic Park is a great movie because the action keeps the audience in suspense throughout the film and has great effects that keep the audience riveted to their seats. The dinosaurs looked and sounded real. Steven Spielberg really deserves to win the Best Picture Award for Jurassic Park, as he is a creative director and his film will never become outdated and will always have an exciting storyline.

Wednesday, November 13, 2019

Figuring Out My World: Alison May Essay -- Disease/Disorders

Figuring Out My World: Alison May Alison's story is the perfect example of what many families must go through when faced with the possibility of having a child diagnosed with a learning disability. Alison was not diagnosed with visual and auditory dyslexia until the summer before entering college. However, while she was still a toddler, her symptoms had been brought to her mother's attention by her sister's teacher. Alison's mother then noticed her habit of repeating words incorrectly and how Alison would need tactile clues to follow directions. At the recommendation of her kindergarten teacher, Alison was tested for learning disabilities, and the conclusion from the school psychologists was that she was acting stubborn or disobedient. Her family did not stop with the school's diagnosis. They had private testing completed that confirmed Alison did not have a specific learning disability. The final word came from a relative who happened to be a psychologist. He insisted Alison would grow out of her difficulties. So Alison continued on through her entire elementary, middle and high school journey as a student and daughter with an undiagnosed learning disability. Alison spent 12 years of her life learning how to learn. She was comfortable with conversation, but could not understand directions. This caused her a lot of self-esteem issues as a young child trying to fit in with all the other kids. She felt an enormous amount of pressure at both school and home. At age seven, she finally came to the realization that she just did not understand. That is when she began to develop coping mechanisms like asking others to repeat and clarify directions, spoken or written. She used the cues of those around her, and observed her classmates and reactions...
...yslexia http://www.tsrhc.org/dyslexia-take-flight.htm
• Intel Reader from Intel-GE Care Innovation http://www.careinnovations.com/assistive-reading-technology
Agencies for Dyslexia
• Catapult Learning http://www.catapultlearning.com/
• Children's Dyslexia Centers of New Jersey http://www.mlcnj.org/
• Dyslexia My Life http://dyslexiamylife.org/resour3.html
• Bridges4Kids http://www.bridges4kids.org/states/nj.htm
• National Disability Rights Network http://www.ndrn.org/
Organizations for Dyslexia
• Dyslexia International http://www.dyslexia-international.org/index.html
• The International Dyslexia Association http://www.interdys.org/
• American Dyslexia Association http://www.american-dyslexia-association.com/
• Davis Dyslexia Association International http://www.dyslexia.com/
• National Center for Learning Disabilities http://www.ncld.org/

Sunday, November 10, 2019

Importance of breakfast Essay

Wonder why your mom is behind you every day insisting on having breakfast when you leave for college, school or work? Well, she is right. Breakfast is important for each one of us. Let's find out how. Breakfast, which literally means breaking an overnight fast, is the first meal of the day. It is the most important meal of the day. According to Ayurveda as well, food is digested best in the morning. Thus, heavy foodstuffs like paranthas, laddoos, etc. can be had in the morning. This is because the body constantly uses up energy during the night for important body functions like pumping blood to all parts of the body, breathing, etc. Our body is starving the entire night while we are sleeping, using up stored energy to perform vital functions. Hence, fueling the body early in the morning is extremely important to maintain adequate blood sugar levels and prevent fatigue and tiredness through the day. Breakfast is extremely important for children and adolescents, as children who have a proper breakfast are more likely to have better concentration, problem-solving skills and coordination than children who skip breakfast. Breakfast eaters are at a lower risk of gaining weight compared to those who skip breakfast. This is because breakfast reduces hunger throughout the day, and these people make better choices at lunch and other meals. On the contrary, people who skip breakfast thinking they can save a few calories tend to eat more at lunch and other meals, leading to weight gain. The last meal of the day is dinner, and the gap between dinner and breakfast is nearly twelve hours; for a person who skips breakfast, this duration extends to approximately 16 hours. Our body is constantly at work and needs energy for it. If we extend the gap between dinner and the next meal, chances are that we will get fatigued and tired early. This affects the quality of the work we do. Hence, breakfast is needed.
Research says that people who eat breakfast have a healthier diet overall. They eat healthily and make healthier choices. Those who consume breakfast cereals take in more of the vitamins and minerals needed for body function. Breakfast also plays a role in improving mood, as complex-carbohydrate cereals have a positive effect on mood. Research published in the American Journal of Clinical Nutrition states that "breakfast omission is associated with an increased risk of Type 2 Diabetes in men." Having seen the benefits of breakfast, let us check out some healthy breakfast options:
• Oats in milk with apple.
• Egg white omelette with chapatti and orange juice.
• Moong cheela and milkshake.
• Cottage cheese (paneer)/sprouts parantha with lassi (low fat and sugar).
• Broken wheat dalia/lapsi and buttermilk along with a fruit.
• Oil-free ragi idli/dosa with sambhar along with a fruit.
Breakfast provides essential nutrients so that you can start off your day well and stay energetic throughout. Skipping breakfast will have a detrimental effect on health in the long run. So, eat your breakfast wisely and stay energetic! Did you know that breakfast is the most important meal of the day? A good breakfast provides the nutrients that people need to start their day off right. Studies show that children who eat a good breakfast do better in school than children who do not. Studies also show a link between participation in the School Breakfast Program and improved academic performance and psychosocial behavior. Children who eat a good breakfast tend to perform better in school, have better attendance and show decreased hyperactivity. Children who don't eat breakfast tend not to perform as well, and also tend to have behavior problems such as fighting, stealing, and not listening to their teachers (Dr. Ronald Kleinman, Harvard Medical School).
The School Breakfast Program provides a nutritious meal to children who might otherwise not eat breakfast, and is designed to provide children with one-fourth of their daily nutrients. This program offers fruit, cold cereal and milk daily, and some Coldwater Schools buildings also offer a variety of hot dishes like breakfast pizza, pancake wraps, and oatmeal. If your child eats breakfast at home, choose a breakfast with milk, fruit and cereal (grain product). These three foods can make a good breakfast. Eating a healthy breakfast does not need to take a lot of time. In the next column you will find some quick and healthy breakfast ideas. The importance of breakfast: Everyone knows that the key to successful weight loss is a combination of regular exercise, healthy eating and a positive mind. There's no point working out five days a week if you're going to give in to temptation and inhale three kebabs and a packet of Tim Tams when you get home. Healthy eating doesn't necessarily mean dieting; it refers instead to eating sensible, balanced amounts of the right foods at the right times – and that includes a good breakfast. There's a reason why people have said for many years that "breakfast is the most important meal of the day". After six, seven or eight hours – if you're lucky – of sleep, your body and brain need some fuel to power and prepare them for the day ahead. Like a car, you can't run on an empty tank; you need some petrol. It's a well-known fact that people who eat breakfast lose more weight than people who don't, and this is due to several reasons: 1. Breakfast provides the energy your body requires in order to perform activities; therefore, you're not so tired and can do more. 2. It kickstarts the body into producing the enzymes needed to metabolise fat, helping to shed the pounds. 3. Eating a good breakfast keeps you full for longer and may make you less likely to reach for snacks. In addition, breakfast is generally good for you.
Those who eat breakfast are 50 per cent less likely, according to US researchers, to have blood sugar problems. Consequently, they have a decreased risk of developing diabetes or having high cholesterol levels, which could lead to heart disease. Also, some breakfast foods such as grains, seeds and dried fruit provide vitamins and minerals that are hard to find in other foods. People who don't eat breakfast often complain that it's "too early" to eat or that they don't have time in the morning. Paltry excuses! Ways to rectify this include not eating too late the evening before, going to bed earlier or eating breakfast on the train/bus on the way to work. Who wouldn't want to eat breakfast with such an array of delicious morning munchies available? Uninspired? Try some of these: Make your own muesli by toasting some oats, then adding seeds, nuts and fruit as desired. Slice a banana on top, garnish with blueberries and pour on some yoghurt. This high-fibre option will keep you full until lunchtime, and the nutrients derived from the seeds, nuts and fruit will do all sorts of good. Alternatively, how about blasting lots of lovely fruit into a smoothie, which you could drink on the way to work? Smoothies are far more filling than you might imagine and allow you to be really creative. Experiment with flavour combinations, thin out a little with water, milk, fruit juice or low-fat yoghurt, and enjoy. For traditionalists, two slices of wholemeal bread with a scraping of butter and some Vegemite or a bowl of whole-grain cereal is fine. Top with honey or dried fruit for sugar and splash over some semi- or skimmed milk to reduce the refined sugar and fat content. To say breakfast is the "most important" meal of the day underplays how significant it really is. Providing energy and nutrients and warding off chronic conditions, can you really afford not to eat it?
Improved Grades: Eating breakfast can improve cognitive performance, test scores and achievement scores in students, especially in younger children. According to a study published in the journal "Archives of Pediatrics and Adolescent Medicine," students who increased their participation in school breakfast programs had significantly higher math scores than students who skipped or rarely ate breakfast. As an added benefit, the group of students who increased breakfast participation also had decreased rates of tardiness and absences. Increased Concentration: Students who eat a low-glycemic, balanced breakfast may have better concentration and more positive reactions to difficult tasks than students who eat a carbohydrate-laden breakfast. According to research published in "Physiology and Behavior," students given a low-glycemic breakfast were able to sustain attention longer than children given a high-glycemic breakfast. Children following the low-glycemic breakfast plan also had improved memory and fewer signs of frustration when working on school tasks. Try old-fashioned oatmeal with a handful of walnuts or some scrambled eggs with spinach, peppers and a sprinkle of cheese. Weight Maintenance: Eating breakfast regularly may also help students maintain a healthy weight. According to a study published in "Public Health Nutrition," children who skipped breakfast in the morning were more likely to overeat and have a lower overall diet quality than children who ate breakfast every day. This led to increased body mass index, or BMI, measurements. Considerations: While eating any breakfast is better than skipping breakfast altogether, some choices are better than others. Carbohydrate-only breakfasts, such as bagels and toast, can give energy for one to two hours, while complete breakfasts that contain a balance of protein, fat and carbohydrates can keep blood sugar levels steady for hours, according to MealsMatter.org.
Try some toast with peanut butter and a piece of fruit, or cereal with milk and a glass of 100 percent fruit juice. If you have time, make an omelet with cheese, broccoli and some turkey bacon.

Friday, November 8, 2019

Argot Definition and Examples

Argot Definition and Examples Argot is a specialized vocabulary or set of idioms used by a particular social class or group, especially one that functions outside the law. Also called cant and cryptolect. French novelist Victor Hugo observed that argot is subject to perpetual transformation – a secret and rapid work which ever goes on. It makes more progress in ten years than the regular language in ten centuries (Les Misérables, 1862). ESL specialist Sara Fuchs notes that argot is both cryptic and playful in nature and is "particularly rich in vocabulary referring to drugs, crime, sexuality, money, the police, and other authority figures" (Verlan, l'envers, 2015). Etymology: From the French; origin unknown. Examples and Observations: The Argot of the Racetrack: The argot of the racetrack is responsible for piker (small-town gambler), ringer (illegally substituted horse), shoo-in (fixed race, easy win), and others. (Connie C. Eble, Slang Sociability. UNC Press, 1996) The Argot of Prisoners: Prison argot, originally defined as the jargon of thieves, is a particular form of slang (Einat 2005) – in some circumstances, a complete language – capable of describing the world from the perspective of the prison. It has been argued that prisoners live, think, and function within the framework defined by the argot (Encinas 2001), whose vocabulary may supply alternative names for objects, psychological states of mind, personnel roles, situations and the activities of prison life. Experienced inmates use argot fluently and can switch between regular names and their argot counterparts, and the degree of familiarity with argot is an important symbol of group membership among prison inmates (Einat 2005). (Ben Crewe and Tomer Einat, Argot (Prison). Dictionary of Prisons and Punishment, ed. by Yvonne Jewkes and Jamie Bennett.
Willan, 2008) The Argot of Pool Players: The poolroom hustler makes his living by betting against his opponents in different types of pool or billiard games, and as part of the playing and betting process he engages in various deceitful practices. The terms hustler for such a person and hustling for his occupation have been in poolroom argot for decades, antedating their application to prostitutes. Like all other American deviant argots I know of, [hustlers' argot] also reveals numerous facets that testify against a secrecy interpretation. Some examples: (1) Hustlers always use their argot among themselves when no outsiders are present, where it could not possibly have a secretive purpose. (2) The argot itself is not protected but is an open secret, i.e., its meanings are quite easily learned by any outsider who wishes to learn them and is an alert listener or questioner. (3) The argot is elaborated far beyond any conceivable need to develop a set of terms for deviant phenomena, and even far beyond any need to develop a full-scale technical vocabulary . . . (Ned Polsky, Hustlers, Beats, and Others. Aldine, 2006) The Argot of Card Players: A cardsharp who is out to cheat you may be dealing from the bottom of the deck and giving you a fast shuffle, in which case you may get lost in the shuffle. You might call such a low-down skunk a four-flusher. Flush, a hand of five cards all of one suit, flows from the Latin fluxus because all the cards flow together. Four-flusher characterizes a poker player who pretends to such good fortune but in fact holds a worthless hand of four same-suit cards and one that doesn't match. All of these terms originated with poker and other betting card games and have undergone a process that linguists call broadening. A good example of movement from one specific argot to another is wild card berth or wild card player as used in football and tennis.
In these sports, a team hopes for back-to-back victories – from a fortuitous ace-down-ace-up as the first two cards in a game of five-card stud. (Richard Lederer, A Man of My Words. Macmillan, 2003) The Lighter Side of Argot: A streak of humour runs through the traditional argot. Prisons were often described as schools, as in the contemporary College of Correction, and the hulks used to accommodate prisoners were the floating academies. Brothels were convents or nunneries, the prostitutes who worked in them were nuns, and the madam was an abbess. (Barry J. Blake, Secret Language. Oxford University Press, 2010) Pronunciation: ARE-go or ARE-get

Wednesday, November 6, 2019

Caligula essays

Caligula essays Caligula has been known to history as a colorful emperor. He was the youngest son of Germanicus Caesar and the grandnephew of Tiberius. As a child, Caligula was said to have been very ill with a high fever that probably affected his mind, which would explain all the strange things that he did throughout his life. Gaius was given the name Caligula (Latin for "little boot") in the military camps where he spent some of his early life. He succeeded his granduncle Tiberius in the year 37. Caligula was very popular with the army at first, since he had served in it himself. Unlike Tiberius, Caligula was not concerned with keeping a surplus in the Roman treasury. Soon after he came to power, he began to throw lavish festivals and gladiator games. The people of Rome knew that he was depleting the treasury, and he quickly became unpopular. He answered them with acts of tyranny and began to have people executed at whim. He banished or murdered most of his own relatives and committed incest with two of his three sisters. He retreated to Capri, where he indulged himself in immoral acts. In 37, Caligula became ill and told all that he was not really ill but was metamorphosing into a god. He then forced the Senate to deify him and his three sisters. He also forced the Senate to make his horse a Senator on the grounds that it was Alexander the Great's horse reincarnated. This all became too much for the people of Rome. The leader of the Praetorian Guard led a revolt, and Caligula was assassinated on January 24, 41. ...

Monday, November 4, 2019

Financial Management Essay Example | Topics and Well Written Essays - 3000 words

Financial Management - Essay Example on lies with the management in procuring funds from economic sources, it is also necessary to consider the effects of such acquisitions from the company's point of view. Therefore, the financial sagacity of any business lies in how economically such funds are procured. It includes administration and maintenance of financial assets. From the point of view of the organisation, financial management covers the processes associated with the mobilization of funds from various sources when needed at an acceptable cost – the Financing Decision – through banks and other financial institutions, and controlling the fund flow by monitoring its use to ensure the procurement and deployment of funds according to the plan. (Financial Management). Northern PLC is a manufacturing company which has sub-divisions globally. The company is facing a deficiency of funds. There are many sources for financing the company through the capital market (shares, debt securities, and venture capital). The capital market is the place where government, institutions and individuals trade financial securities for funds. Two major capital markets are the stock and bond markets. Some examples are the New York Stock Exchange (NYSE) and the American Stock Exchange (AMEX). The capital market provides economic efficiency by channelling money from those who have no immediate productive use for it. In the capital market, cash or savings which are risk-free are converted into risky assets for future benefit. If the company is not performing well, then there will be a decrease in the share price of the company, which results in the dissatisfaction of shareholders and other stakeholders such as suppliers and customers. If the company is performing well, the share price increases, and because of this the shareholder benefits through dividends. A company that pays a regular dividend has a slight edge. (Woepking). 1.
Weak form efficiency- In this the share price reflects the

Friday, November 1, 2019

Strategic Management of health care organizations Essay

Strategic Management of health care organizations - Essay Example To do this, different processes associated with service delivery, like efficient patient flow, wait times and various administrative functions, have been addressed. This has led to different implementation strategies, like pre-service, point-of-service, and after-service, being devised. All the service areas are meant to provide customers with valued services. While pre-service comes before the encounter, point-of-service (POS) is at the time of the encounter and post-service is after the encounter. Different healthcare organizations have benefited by aligning strategies to these different encounters. The pre-service strategy is devised after determining the customers' wants and needs. This requires first determining who the customers are, the price acceptable to them, and the time and location convenient to them, and then developing an internal culture that focuses on customers (SDS, n.d.). Customer and competitor descriptions are essential to decide on this service area. The basic premise is: what does the customer want in terms of product, price, place and promotion? Thorough market research is essential for this. This is then followed by market segmentation based on clinical areas, demographics, psychographics and markets defined by growth opportunities. A customer analysis is then done to determine which should be the target market. It also determines what motivates the individual to use health care and which aspects of the services offered are really important to the customer. Whether the customer is currently satisfied is determined, which helps to improve the clinical service. It also determines on what basis the customer chooses one organization over another. Hence the central issue in this service area is determining the right customer and devising the rest of the strategies on that basis. For POS the central issues are quality, efficiency, innovation and flexibility. The internal assessment of

Wednesday, October 30, 2019

Strategic Management Coursework Example | Topics and Well Written Essays - 3000 words

Strategic Management - Coursework Example The rising consumer needs in developing as well as developed markets are generating a uniform business opportunity. Companies of local as well as international origin are actively focusing on entering newer markets as well as expanding their presence in existing markets so as to capitalize on newly emerging business opportunities. The business firms of the 21st century are actively focusing on radical as well as disruptive innovation so as to effectively fulfill the needs of the masses. It is important to highlight that because of the presence of multiple firms offering homogenous products and services, the competition in the market is extremely high. The availability of similar kinds of product and service offerings is resulting in an increase in power for the buyers. It has to be said that to deal with the intense market competition as well as to retain their competitive edge, organizations need to design as well as execute successful strategies. This particular assignment focuses on the aspects of strategy development, cutting-edge technology as well as the sustainable competitive advantage which are necessary for present-day organizations. Traditionally, organizations around the world follow a well-designed hierarchy, the top of which is often tasked with the responsibility of strategy planning as well as implementation. For the implementation as well as execution of strategies, companies in various corners of the world often follow the usual one-way top-down implementation approach. Over the course of execution of business, there have often been doubts about whether it is possible to design effective strategies without following the traditional top-down route. In order to find a satisfying answer to this particular question, it is important to highlight that there is a high level of persistence associated with the hierarchical concept of an organization. Organizations which

Monday, October 28, 2019

Approaches to the Analysis of Survey Data Essay Example for Free

Approaches to the Analysis of Survey Data Essay

1. Preparing for the Analysis

1.1 Introduction

This guide is concerned with some fundamental ideas of analysis of data from surveys. The discussion is at a statistically simple level; other more sophisticated statistical approaches are outlined in our guide Modern Methods of Analysis. Our aim here is to clarify the ideas that successful data analysts usually need to consider to complete a survey analysis task purposefully. An ill-thought-out analysis process can produce incompatible outputs and many results that never get discussed or used. It can overlook key findings and fail to pull out the subsets of the sample where clear findings are evident. Our brief discussion is intended to assist the research team in working systematically; it is no substitute for clear-sighted and thorough work by researchers. We do not aim to show a totally naïve analyst exactly how to tackle a particular set of survey data. However, we believe that where readers can undertake basic survey analysis, our recommendations will help and encourage them to do so better. Chapter 1 outlines a series of themes, after an introductory example. Different data types are distinguished in section 1.2. Section 1.3 looks at data structures; simple if there is one type of sampling unit involved, and hierarchical with e.g. communities, households and individuals. In section 1.4 we separate out three stages of survey data handling – exploration, analysis and archiving – which help to define expectations and procedures for different parts of the overall process. We contrast the research objectives of description or estimation (section 1.5), and of comparison (section 1.6) and what these imply for analysis. Section 1.7 considers when results should be weighted to represent the population – depending on the extent to which a numerical value is or is not central to the interpretation of survey results.
In section 1.8 we outline the coding of non-numerical responses. The use of ranked data is discussed briefly in section 1.9.

In Chapter 2 we look at the ways in which researchers usually analyse survey data. We focus primarily on tabular methods, for reasons explained in section 2.1. Simple one-way tables are often useful, as explained in section 2.2. Cross-tabulations (section 2.3) can take many forms and we need to think which are appropriate. Section 2.4 discusses issues about 'accuracy' in relation to two- and multi-way tables. In section 2.5 we briefly discuss what to do when several responses can be selected in response to one question.

© SSC 2001 – Approaches to the Analysis of Survey Data

Cross-tabulations can look at many respondents, but only at a small number of questions, and we discuss profiling in section 2.6, cluster analysis in section 2.7, and indicators in sections 2.8 and 2.9.

1.2 Data Types

Introductory example: on a nominal scale the categories recorded, usually counted, are described verbally. The 'scale' has no numerical characteristics. If a single one-way table resulting from simple summarisation of nominal (also called categorical) scale data contains frequencies:

  Christian   Hindu   Muslim   Sikh   Other
      29       243     117      86     25

there is little that can be done to present exactly the same information in other forms. We could report the highest frequency first, as opposed to alphabetic order, or reduce the information in some way, e.g. if one distinction is of key importance compared to the others:

  Hindu   Non-Hindu
   243       257

On the other hand, where there are ordered categories, the sequence makes sense only in one, or in exactly the opposite, order:

  Excellent   Good   Moderate   Poor   Very Bad
     29       243      117       86       25

We could reduce the information by combining categories as above, but we can also summarise, somewhat numerically, in various ways.
For example, accepting a degree of arbitrariness, we might give scores to the categories:

  Excellent   Good   Moderate   Poor   Very Bad
      5        4        3        2        1

and then produce an 'average score' – a numerical indicator – for the sample of:

  (29 × 5 + 243 × 4 + 117 × 3 + 86 × 2 + 25 × 1) / (29 + 243 + 117 + 86 + 25) = 3.33

This is an analogue of the arithmetical calculation we would do if the categories really were numbers, e.g. family sizes. The same average score of 3.33 could arise from differently patterned data, e.g. from rather more extreme results:

  Excellent   Good   Moderate   Poor   Very Bad
     79       193      117       36       75

Hence, as with any other indicator, this 'average' only represents one feature of the data, and several summaries will sometimes be needed.

A major distinction in statistical methods is between quantitative data and the other categories exemplified above. With quantitative data, the difference between the values from two respondents has a clearly defined and incontrovertible meaning, e.g. "It is 5°C hotter now than it was at dawn" or "You have two more children than your sister". Commonplace statistical methods provide many well-known approaches to such data, and are taught in most courses, so we give them only passing attention here. In this guide we focus primarily on the other types of data, coded in number form but with less clear-cut numerical meaning, as follows.

Binary data – e.g. yes/no responses – can be coded in 1/0 form, while purely categorical or nominal data – e.g. caste or ethnicity – may be coded 1, 2, 3… using numbers that are just arbitrary labels and cannot be added or subtracted. It is also common to have ordered categorical data, where items may be rated Excellent, Good, Poor, Useless, or responses to attitude statements may be Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree.
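The 'average score' calculation above is straightforward to script. A minimal sketch in pure Python, using the frequencies tabulated in the text (the function name is ours, not the guide's):

```python
# Weighted mean of arbitrary category scores, weighted by category frequency.
def average_score(freqs, scores):
    total = sum(freqs)
    return sum(f * s for f, s in zip(freqs, scores)) / total

scores = [5, 4, 3, 2, 1]  # Excellent ... Very Bad
print(round(average_score([29, 243, 117, 86, 25], scores), 2))  # 3.33
# The more polarised sample gives the same indicator value:
print(round(average_score([79, 193, 117, 36, 75], scores), 2))  # 3.33
```

The identical output for two quite different distributions illustrates the guide's caution that a single indicator can hide the pattern of the data.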
With ordered categorical data the number labels should form a rational sequence, because they have some numerical meaning, e.g. scores of 4, 3, 2, 1 for Excellent through to Useless. Such data support limited quantitative analysis, and are often referred to by statisticians as 'qualitative' – this usage does not imply that the elicitation procedure must satisfy a purist's restrictive perception of what constitutes qualitative research methodology.

1.3 Data Structure

SIMPLE SURVEY DATA STRUCTURE: the data from a single-round survey, analysed with limited reference to other information, can often be thought of as a 'flat' rectangular file of numbers, whether the numbers are counts/measurements, or codes, or a mixture. In a structured survey with numbered questions, the flat file has a column for each question and a row for each respondent, a convention common to almost all standard statistical packages. If the data form a perfect rectangular grid with a number in every cell, analysis is made relatively easy, but there are many reasons why this will not always be the case and flat-file data will be incomplete or irregular. Most importantly:

• Surveys often involve 'skip' questions where sections are missed out if irrelevant, e.g. details of spouse's employment do not exist for the unmarried. These arise legitimately, but imply that different subsets of people respond to different questions. 'Contingent questions', where not everyone 'qualifies' to answer, often lead to inconsistent-seeming results for this reason. If the overall sample size is just adequate, the subset who 'qualify' for a particular set of contingent questions may be too small to analyse in the detail required.

• If some respondents fail to respond to some questions (item non-response) there will be holes in the rectangle. Non-informative non-response occurs if the data are missing for a reason unrelated to the true answers, e.g.
the interviewer turned over two pages instead of one! Informative non-response means that the absence of an answer itself tells you something, e.g. you are almost sure that the missing income value would be one of the highest in the community. A little potentially informative non-response may be ignorable if there is plenty of data. If data are sparse, or if informative non-response is frequent, the analysis should take account of what can be inferred from knowing that there are informative missing values.

HIERARCHICAL DATA STRUCTURE: another complexity of survey data structure arises if the data are hierarchical. A common type of hierarchy is where a series of questions is repeated, say, for each child in the household, combined with a household questionnaire, and maybe data collected at community level. For analysis, we can create a rectangular flat file at the 'child level' by repeating relevant household information in separate rows for each child. Similarly, we can summarise information for the children in a household to create a 'household level' analysis file. The number of children in the household is usually a desirable part of the summary; this "post-stratification" variable can be used to produce sub-group analyses at household level, separating out households with different numbers of child members.

The way the sampling was done can have an effect on interpretation or analysis of a hierarchical study. For example, if children were chosen at random, households with more children would have a greater chance of inclusion, and a simple average of the household sizes would be biased upwards: it should be corrected for selection probabilities.

Hierarchical structure becomes important, and harder to handle, if there are many levels where data are collected, e.g.
government guidance and allocations of resource, District Development Committee interpretations of the guidance, Village Task Force selections of safety-net beneficiaries, then households and individuals whose vulnerabilities and opportunities are affected by targeting decisions taken at higher levels in the hierarchy. In such cases, a relational database reflecting the hierarchical structure is a much more desirable way than a spreadsheet to define and retain the inter-relationships between levels, and to create many analysis files at different levels. Such issues are described in the guide The Role of a Database Package for Research Projects. Any one of the analysis files may be used as we discuss below, but any such study will be looking at one facet of the structure, and several analyses will have to be brought together for an overall interpretation. A more sophisticated approach using multi-level modelling, described in our guide on Modern Methods of Analysis, provides a way to look at several levels together.

1.4 Stages of Analysis

It is often worth distinguishing three stages of analysis: exploratory analysis, deriving the main findings, and archiving.

EXPLORATORY DATA ANALYSIS (EDA) means looking at the data files, maybe even before all the data have been collected and entered, to get an idea of what is there. It can lead to additional data collection if this is seen to be needed, or to savings by stopping data collection when a conclusion is already clear or existing results prove worthless. It is not assumed that results from EDA are ready for release as study findings.

• EDA usually overlaps with data cleaning; it is the stage where anomalies become evident, e.g. individually plausible values may lead to a way-out point when combined with other variables on a scatterplot.
In an ideal situation, EDA would end with confidence that one has a clean dataset, so that a single version of the main data files can be finalised and 'locked', and all published analyses derived from a single consistent form of 'the data'. In practice, later stages of analysis often produce additional queries about data values.

• Such exploratory analysis will also show up limitations in contingent questions, e.g. we might find we don't have enough currently married women to analyse their income sources separately by district. EDA should include the final reconciliation of analysis ambitions with data limitations.

• This phase can allow the form of analysis to be tried out and agreed, developing analysis plans and program code in parallel with the final data collection, data entry and checking.

Purposeful EDA allows the subsequent stage of deriving the main findings to be relatively quick, uncontroversial, and well organised.

DERIVING THE MAIN FINDINGS: the second stage will ideally begin with a clear-cut clean version of the data, so that analysis files are consistent with one another, and any inconsistencies, e.g. in numbers included, can be clearly explained. This is the stage we amplify upon later in this guide. It should generate the summary findings, relationships, models, interpretations and narratives, and recommendations that research users will need to begin utilising the results. Of course one needs to allow time for 'extra' but usually inevitable tasks such as:

• follow-up work to produce further, more detailed findings, e.g. elucidating unexpected results from the pre-planned work;

• a change made to the data each time a previously unsuspected recording or data entry error comes to light. Then it is important to correct the database and all analysis files already created that involve the value to be corrected.
This will mean repeating analyses that have already been done using, but not revealing, the erroneous value. If that analysis was done "by mouse clicking", with no record of the steps, this can be very tedious. This stage of work is best undertaken using software that can keep a log: it records the analyses in the form of program instructions that can readily and accurately be re-run.

ARCHIVING means that data collectors keep, perhaps on CD, all the non-ephemeral material relating to their efforts to acquire information. Obvious components of such a record include: (i) data collection instruments, (ii) raw data, (iii) metadata recording the what, where, when, and other identifiers of all variables, (iv) variable names and their interpretations, and labels corresponding to values of categorical variables, (v) query programs used to extract analysis files from the database, (vi) log files defining the analyses, and (vii) reports. Often georeferencing information, digital photographs of sites and scans of documentary material are also useful. Participatory village maps, for example, can be kept for reference as digital photographs.

Surveys are often complicated endeavours where analysis covers only a fraction of what could be done. Reasons for developing a good management system, of which the archive is part, include:

• keeping the research process organised as it progresses;

• satisfying the sponsor's (e.g. DFID's) contractual requirement that data should be available if required by the funder or by legitimate successor researchers;

• permitting a detailed re-analysis to authenticate the findings if they are questioned;

• allowing a different breakdown of results, e.g. when administrative boundaries are redefined;

• linking several studies together, for instance in longer-term analyses carrying baseline data through to impact assessment.
1.5 Population Description as the Major Objective

In the next section we look at the objective of comparing results from sub-groups, but a more basic aim is to estimate a characteristic: an absolute number, such as the count of proposed beneficiaries in a category, or a relative number, such as the prevalence of HIV seropositives. The estimate may be needed to describe a whole population or sections of it. In the basic analyses discussed below, we need to bear in mind both the planned and the achieved sampling structure.

Example: suppose 'before' and 'after' surveys were each planned to have a 50:50 split of urban and rural respondents. Even if we achieved 50:50 splits, these would need some manipulation if we wanted to generalise the results to represent an actual population split of 70:30 urban:rural. Say we wanted to assess the change from 'before' to 'after' and the achieved samples were in fact split 55:45 and 45:55. We would have to correct the results carefully to get a meaningful estimate of change.

Samples are often stratified, i.e. structured to capture and represent particular segments of the target population. This may be much more sophisticated than the urban/rural split in the previous paragraph. Within-stratum summaries serve to describe and characterise each of these parts individually. If required by the objectives, overall summaries, which put together the strata, need to describe and characterise the whole population. It may be fine to treat the sample as a whole and produce simple, unweighted summaries if (i) we have set out to sample the strata proportionately, (ii) we have achieved this, and (iii) there are no problems due to hierarchical structure. Non-proportionality arises from various quite distinct sources, in particular:

• Case A: often sampling is disproportionate across strata by design, e.g.
the urban situation is more novel, complex, interesting or accessible, and gets greater coverage than the fraction of the population classed as rural.

• Case B: sometimes particular strata are bedevilled with high levels of non-response, so that the data are not proportionate to stratum sizes, even when the original plan was that they should be.

If we ignore non-proportionality, a simple-minded summary over all cases is not a proper representation of the population in these instances. The 'mechanistic' response to 'correct' both the above cases is (1) to produce within-stratum results (tables or whatever), (2) to scale the numbers in them to represent the true population fraction that each stratum comprises, and then (3) to combine the results.

There is often a problem with doing this in case B, where non-response is an important part of the disproportionality: the reasons why data are missing from particular strata often correspond to real differences in the behaviour of respondents, especially those omitted or under-sampled, e.g. "We had very good response rates everywhere except in the north. There a high proportion of the population are nomadic, and we largely failed to find them." Just scaling up data from settled northerners does not take account of the different lifestyle and livelihood of the missing nomads. If you have largely missed a complete category, it is honest to report partial results, making it clear which categories are not covered and why.

One common 'sampling' problem arises when a substantial part of the target population is unwilling or unable to cooperate, so that the results in effect only represent a limited subset – those who volunteer or agree to take part. Of course the results are biased towards, e.g., those who command sufficient resources to afford the time, or those who habitually take it upon themselves to represent others.
We would be suspicious of any study which appeared to have relied on volunteers but did not look carefully at the limits this imposed on the generalisability of the conclusions. If you have a low response rate from one stratum, but are still prepared to argue that the data are somewhat representative, the situation is at the very least uncomfortable. Where you have disproportionately few responses, the multipliers used in scaling up to 'represent' the stratum will be very high, so your limited data will be heavily weighted in the final overall summary. If there is any possible argument that these results are untypical, it is worthwhile to think carefully before giving them extra prominence in this way.

1.6 Comparison as the Major Objective

One sound reason for disproportionate sampling is that the main objective is a comparison of subgroups in the population. Even if one of two groups to be compared is very small, say 10% of the total number in the population, we now want roughly equally many observations from each subgroup, to describe both groups roughly equally accurately. There is no point in comparing a very accurate set of results from one group with a very vague, ill-defined description of the other; the comparison is at least as vague as the worse description. The same broad principle applies whether the comparison is a wholly quantitative one looking at the difference in means of a numerical measure between groups, or a much looser verbal comparison, e.g. an assessment of differences in pattern across a range of cross-tabulations.

If for a subsidiary objective we produce an overall summary giving 'the general picture' of which both groups are part, 50:50 sampling may need to be re-weighted 90:10 to produce a quantitative overall picture of the sampled population.
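The re-weighting just described amounts to combining within-group summaries using population shares rather than sample shares. A minimal sketch, with assumed group means and the 90:10 population split from the example (the figures and function name are ours):

```python
# Combine per-group summaries using population shares, not the 50:50
# shares of the achieved sample. Group means here are assumed figures.
def weighted_overall(group_means, population_shares):
    assert abs(sum(population_shares) - 1.0) < 1e-9
    return sum(m * s for m, s in zip(group_means, population_shares))

majority_mean, minority_mean = 12.0, 8.0     # hypothetical within-group means
naive = (majority_mean + minority_mean) / 2  # treats the 50:50 sample as the population
corrected = weighted_overall([majority_mean, minority_mean], [0.9, 0.1])
print(naive, corrected)  # 10.0 11.6
```

The same function covers the urban:rural 70:30 correction of section 1.5: only the shares change.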
The great difference between true experimental approaches and surveys is that experiments usually involve a relatively specific comparison as the major objective, while surveys much more often do not. Many surveys have multiple objectives, frequently ill defined, often contradictory, and usually not formally prioritised. Along with the likelihood of some non-response, this tends to mean there is no sampling scheme which is best for all parts of the analysis, so various different weighting schemes may be needed in the analysis of a single survey.

1.7 When Weighting Matters

Several times in the above we have discussed how survey results may need to be scaled or weighted to allow for, or 'correct for', inequalities in how the sample represents the population. Sometimes this is of great importance, sometimes not. A fair evaluation of survey work ought to consider whether an appropriate trade-off has been achieved between the need for accuracy and the benefits of simplicity.

If the objective is formal estimation, e.g. of total population size from a census of a sample of communities, we are concerned to produce a strictly numerical answer, which we would like to be as accurate as circumstances allow. We should then correct as best we can for a distorted representation of the population in the sample. If groups being formally compared run across several population strata, we should try to ensure the comparison is fair by similar corrections, so that the groups are compared on the basis of consistent samples. In these cases we have to face up to problems such as unusually large weights attached to poorly-responding strata, and we may need to investigate the extent to which the final answer is dubious because of sensitivity to results from such subsamples.

Survey findings are often used in 'less numerical' ways, where it may not be so important to achieve accurate weighting, e.g.
â€Å"whatever varieties they grow for sale, a large majority of farm households in Sri Lanka prefer traditional red rice varieties for home consumption because they prefer their flavour†. If this is a clear-cut finding which accords with other information, if it is to be used for a simple decision process, or if it is an interim finding which will prompt further investigation, there is a lot to be said for keeping the analysis simple. Of course it saves time and money. It makes the process of interpretation of the findings more accessible to those not very involved in the study. Also, weighting schemes depend on good information to create the weighting factors and this may be hard to pin down.  © SSC 2001 – Approaches to the Analysis of Survey Data 13 Where we have worryingly large weights, attaching to small amounts of doubtful information, it is natural to want to put limits on, or ‘cap’, the high weights, even at the expense of introducing some bias, i.e. to prevent any part of the data having too much impact on the result. The ultimate form of capping is to express doubts about all the data, and to give equal weight to every observation. The rationale, not usually clearly stated, even if analysts are aware they have done this, is to minimise the maximum weight given to any data item. This lends some support to the common practice of analysing survey data as if they were a simple random sample from an unstructured population. For ‘less numerical’ usages, this may not be particularly problematic as far as simple description is concerned. Of course it is wrong – and may be very misleading – to follow this up by calculating standard deviations and making claims of accuracy about the results which their derivation will not sustain! 
1.8 Coding

We recognise that purely qualitative researchers may prefer to use qualitative analysis methods and software, but where open-form and other verbal responses occur alongside numerical data it is often sensible to use a quantitative tool. From the statistical viewpoint, basic coding implies that we have material which can be put into nominal-level categories. Usually this is recorded in verbal or pictorial form, maybe on audio- or videotape, or written down by interviewers or self-reported. We would advocate computerising the raw data, so it is archived. The following refers to extracting codes describing the routine comments, rather than unique individual ones, which can be kept for subsequent qualitative analysis.

By scanning the set of responses, themes are developed which reflect the items noted in the material. These should reflect the objectives of the activity. It is not necessary to code rare, irrelevant or uninteresting material. In the code development phase, a large enough range of the responses is scanned to be reasonably sure that commonly occurring themes have been noted. If previous literature, or theory, suggests other themes, these are noted too. Ideally, each theme is broken down into unambiguous, mutually exclusive and exhaustive categories, so that any response segment can be assigned to just one and given the corresponding code value. A 'codebook' is then prepared where the categories are listed and codes assigned to them. Codes do not have to be consecutive numbers. It is common to think of codes as presence/absence markers, but there is no intrinsic reason why they should not be graded as ordered categorical variables if appropriate, e.g. on a scale such as fervent, positive, uninterested/no opinion, negative.

The entire body of material is then reviewed and codes are recorded, perhaps in relevant places on questionnaires or transcripts.
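Once a codebook exists, tallying codes is routine. A minimal sketch; the themes, code values and assigned codes below are invented for illustration:

```python
from collections import Counter

# Hypothetical codebook: code values mapped to theme labels.
codebook = {1: "water access", 2: "credit", 3: "transport", 9: "other"}

# Codes a (hypothetical) coder assigned to open-form response segments.
assigned = [1, 1, 2, 3, 1, 9, 2, 2, 1]

# One-way frequency table of the coded themes.
freq = Counter(assigned)
for code, n in sorted(freq.items()):
    print(f"{codebook[code]:<12} {n}")
```

As the text notes, codes need not be consecutive (here 9 = 'other'), and once assigned they can be counted or cross-tabulated like any other nominal data.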
Especially when looking at 'new' material not used in code development, extra items may arise and need to be added to the codebook. This may mean another pass through material already reviewed to add new codes, e.g. because a particular response is turning up more often than expected.

From the point of view of analysis, no particular significance attaches to the particular numbers used as codes, but it is worth bearing in mind that statistical packages are usually excellent at sorting, selecting or flagging, for example, 'numbers between 10 and 19' and other arithmetically defined sets. If these all referred to a theme such as 'forest exploitation activities of male farmers' they could easily be bundled together. It is of course impossible to separate out items given the same code, so deciding the right level of coding detail is essential at an early stage in the process.

When codes are analysed, they can be treated like other nominal or ordered categorical data. The frequencies of different types of response can be counted or cross-tabulated. Since they often derive from text passages and the like, they are particularly well adapted for sorting listings of verbal comments into relevant bundles for detailed non-quantitative analysis.

1.9 Ranking and Scoring

A common means of eliciting data is to ask individuals or groups to rank a set of options. The researchers' decision to use ranks in the first place means that results are less informative than scoring, especially if respondents are forced to choose between some nearly-equal alternatives and some very different ones. A British 8-year-old offered baked beans on toast, or fish and chips, or chicken burger, or sushi with hot radish might rank these 1, 2, 3, 4 but score them 9, 8.5, 8, and 0.5 on a zero-to-ten scale! Ranking is an easy task where the set of ranks is not required to contain more than about four or five choices.
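One common device for making partial rankings analysable is to replace ranks by pseudo-scores and blanks by zeros. A minimal sketch, assuming respondents rank their best four of ten items and leave the rest blank (the function name is ours):

```python
# Replace ranks 1..n_chosen by pseudo-scores n_chosen..1, and blanks
# (None) by zero, giving a complete numeric array per respondent.
def ranks_to_scores(ranks, n_chosen=4):
    return [0 if r is None else n_chosen + 1 - r for r in ranks]

# One respondent's ranks over a ten-item list (None = left blank):
ranks = [2, None, 1, None, None, 4, None, 3, None, None]
print(ranks_to_scores(ranks))  # [3, 0, 4, 0, 0, 1, 0, 2, 0, 0]
```

The resulting arrays can be averaged across respondents, though any such scores are arbitrary, so conclusions should be checked for sensitivity to the chosen values.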
It is common to ask respondents to rank, say, their best four from a list of ten, with 1 = best, etc. Accepting a degree of arbitrariness, we would usually replace ranks 1, 2, 3, 4, and a string of blanks by pseudo-scores 4, 3, 2, 1, and a string of zeros, which gives a complete array of numbers we can summarise – rather than a sparse array where we don't know how to handle the blanks. A project output paper† available on the SSC website explores this in more detail.

† Converting Ranks to Scores for an ad hoc Assessment of Methods of Communication Available to Farmers by Savitri Abeyasekera, Julie Lawson-Macdowell & Ian Wilson. This is an output from DFID-funded work under the Farming Systems Integrated Pest Management Project, Malawi, and DFID NRSP project R7033, Methodological Framework for Combining Qualitative and Quantitative Survey Methods.

Where the instructions were to rank as many as you wish from a fixed, long list, we would tend to replace the variable-length lists of ranks with scores. One might develop these as if respondents each had a fixed amount, e.g. 100 beans, to allocate as they saw fit. If four items were chosen these might be scored 40, 30, 20, 10, or with five chosen 30, 25, 20, 15, 10, with zeros again for unranked items. These scores are arbitrary: instead of 40, 30, 20, 10 we could have made any number of other choices, e.g. 34, 28, 22, 16 or 40, 25, 20, 15; this reflects the rather uninformative nature of rankings, and the difficulty of post hoc construction of information that was not elicited effectively in the first place.

Having replaced ranks by scores, we would usually treat these like any other numerical data, with one change of emphasis. Where results might be sensitive to the actual values attributed to ranks, we would stress sensitivity analysis more than with other types of numerical data, e.g.
re-running analyses with (4, 3, 2, 1, 0, 0, …) pseudo-scores replaced by (6, 4, 2, 1, 0, 0, …). If the interpretations of results are insensitive to such changes, the choice of scores is not critical.

2. Doing the Analysis

2.1 Approaches

Data listings are readily produced by database and many statistical packages. They are generally on a case-by-case basis, so are particularly suitable in EDA as a means of tracking down odd values, or patterns, to be explored. For example, if material is in verbal form, such a listing can give exactly what every respondent was recorded as saying. Sorting these records – according to who collected them, say – may show up great differences in field workers' aptitude, awareness or approach. Data listings can be an adjunct to tabulation: in Excel, for example, the Drill Down feature allows one to look at the data from individuals who appear together in a single cell.

There is a place for graphical methods, especially for presentational purposes, where simple messages need to be given in easily understood, attention-grabbing form. Packages offer many ways of making results bright and colourful, without necessarily conveying more information or a more accurate understanding. A few basic points are covered in the guide Informative Presentation of Tables, Graphs and Statistics.

Where the data are at all voluminous, it is a good idea to tabulate selectively most 'qualitative' but numerically coded data, i.e. the binary, nominal or ordered categorical types mentioned above. Tables can be very effective in presentations if stripped down to focus on key findings, crisply presented.
In longer reports, a carefully crafted, well-documented set of cross-tabulations is usually an essential component of summary and comparative analysis, because of the limitations of approaches which avoid tabulation:

• Large numbers of charts and pictures can become expensive, but also repetitive, confusing and difficult to use as a source of detailed information.

• With substantial data, a purely narrative full description will be so long-winded and repetitive that readers will have great difficulty getting a clear picture of what the results have to say. With a briefer verbal description, it is difficult not to be overly selective. Then the reader has to question why a great deal went into collecting data that merits little description, and should question the impartiality of the reporting.

• At the other extreme, some analysts will skip or skimp the tabulation stage and move rapidly to complex statistical modelling. Their findings are just as much to be distrusted! The models may be based on preconceptions rather than evidence, and they may fit badly and conceal important variations in the underlying patterns.

• In terms of producing final outputs, data listings seldom get more than a place in an appendix. They are usually too extensive to be assimilated by the busy reader, and are unsuitable for presentation purposes.

2.2 One-Way Tables

The most straightforward form of analysis, and one that often supplies much of the basic information needed, is to tabulate results, question by question, as 'one-way tables'. Sometimes this can be done using an original questionnaire and writing on it the frequency or number of people who 'ticked each box'. Of course this does not identify which respondents produced particular combinations of responses, but it is often a first step where a quick and/or simple summary is required.
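A one-way table is just a frequency count per question. A minimal sketch with invented responses to a single question:

```python
from collections import Counter

# Hypothetical answers to one question, one entry per respondent.
responses = ["yes", "no", "yes", "yes", "don't know", "no", "yes"]

# One-way table: frequency of each response category.
table = Counter(responses)
for answer, n in table.most_common():
    print(f"{answer:<12} {n}")
```

Repeating this question by question reproduces, in code, the 'write the counts on a blank questionnaire' first step described above.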
2.3 Cross-Tabulation: Two-Way and Higher-Way Tables

At the most basic level, cross-tabulations break down the sample into two-way tables showing the response categories of one question as row headings, and those of another question as column headings. If for example each question has five possible answers, the table breaks the total sample down into 25 subgroups. If the answers are subdivided, e.g. by sex of respondent, there will be one three-way table, 5x5x2, probably shown on the page as separate two-way tables for males and for females. The total sample size is now split over 50 categories, and the degree to which the data can sensibly be disaggregated will be constrained by the total number of respondents represented.

There are usually many possible two-way tables, and even more three-way tables. The main analysis needs to involve careful thought as to which ones are necessary, and how much detail is needed. Even after deciding that we want some cross-tabulation with categories of ‘question J’ as rows and ‘question K’ as columns, there are several other decisions to be made:

• The number in the cells of the table may be just the frequency, i.e. the number of respondents who gave that combination of answers. This may be rephrased as a proportion or a percentage of the total. Alternatively, percentages can be scaled so they total 100% across each row or down each column, so as to make particular comparisons clearer.

• The contents of a cell can equally well be a statistic derived from one or more other questions, e.g. the proportion of the respondents falling in that cell who were economically-active women. Often such a table has an associated frequency table to show how many responses went into each cell. If the cell frequencies represent small subsamples the results can vary wildly, just by chance, and should not be over-interpreted.
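A two-way table with row percentages can be built from first principles; the respondent records below are hypothetical:

```python
# Two-way cross-tabulation: respondents broken down by the response
# categories of two questions. The records below are invented.
records = [
    ("male", "adopter"), ("male", "non-adopter"), ("female", "adopter"),
    ("female", "adopter"), ("male", "adopter"), ("female", "non-adopter"),
]

# Build the table of cell frequencies.
table = {}
for sex, status in records:
    table.setdefault(sex, {}).setdefault(status, 0)
    table[sex][status] += 1

# Row percentages (each row summing to 100%) make comparisons
# between rows clearer than raw frequencies.
for sex, cells in sorted(table.items()):
    row_total = sum(cells.values())
    pcts = {k: round(100 * v / row_total, 1) for k, v in cells.items()}
    print(sex, cells, pcts)
```

Statistical packages produce the same structure directly; the point here is only what the cell frequencies and row percentages are.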
• Where interest focuses mainly on one ‘area’ of a two-way table it may be possible to combine rows and columns that we don’t need to separate out, e.g. ruling party supporters vs. supporters of all other parties. This simplifies interpretation and presentation, as well as reducing the impact of chance variations where there are very small cell counts.

• Frequently we don’t just want the cross-tabulation for ‘all respondents’. We may want to have the same table separately for each region of the country – described as segmentation – or for a particular group on whom we wish to focus, such as ‘AIDS orphans’ – described as selection.

• Because of varying levels of success in covering a population, the response set may end up being very uneven in its coverage of the target population. Then simply combining over the respondents can misrepresent the intended population. It may be necessary to show the patterns in tables, sub-group by sub-group, to convey the whole picture. An alternative, discussed in Part 1, is to weight up the results from the sub-groups to give a fair representation of the whole.

2.4 Tabulation and the Assessment of Accuracy

Tabulation is usually purely descriptive, with limited effort made to assess the ‘accuracy’ of the numbers tabulated. We caution that confidence intervals are sometimes very wide when survey samples have been disaggregated into various subgroups: if crucial decisions hang on a few numbers it may well be worth putting extra effort into assessing – and discussing – how reliable these are. If the uses intended for various tables are not very numerical or not very crucial, it is likely to cause unjustifiable delay and frustration to attempt to put formal measures of precision on the results.
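The point about disaggregation widening confidence intervals can be illustrated numerically. The sketch below uses the standard normal approximation for a proportion; the sample sizes and counts are invented, chosen so that the full sample and a small subgroup show the same 40% proportion:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion,
    using the normal approximation (adequate for moderate n)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# The same observed proportion (40%) from the full sample and from a
# small subgroup after disaggregation; the interval for the subgroup
# is far wider, so its value should not be over-interpreted.
for successes, n in [(200, 500), (8, 20)]:
    lo, hi = proportion_ci(successes, n)
    print(f"n={n:3d}: 40% with approx. 95% CI ({100*lo:.1f}%, {100*hi:.1f}%)")
```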
Usually, the most important considerations in assessing the ‘quality’ or ‘value’ or ‘accuracy’ of results are not those relating to ‘statistical sampling variation’, but those which appraise the following factors and their effects:

• evenness of coverage of the target (intended) population
• suitability of the sampling scheme reviewed in the light of field experience and findings
• sophistication and uniformity of response elicitation and accuracy of field recording
• efficacy of measures to prevent, compensate for, and understand non-response
• quality of data entry, cleaning and metadata recording
• selection of appropriate subgroups in analysis

If any of the above factors raises important concerns, it is necessary to think hard about the interpretation of ‘statistical’ measures of precision such as standard errors. A factor that has uneven effects will introduce biases, whose size and detectability ought to be dispassionately appraised and reported with the conclusions.

Inferential statistical procedures can be used to guide generalisations from the sample to the population, where a survey is not badly affected by any of the above. Inference addresses issues such as whether apparent patterns in the results have come about by chance or can reasonably be taken to reflect real features of the population. Basic ideas are reviewed in Understanding Significance: the Basic Ideas of Inferential Statistics. More advanced approaches are described in Modern Methods of Analysis.

Inference is particularly valuable, for instance, in determining the appropriate form of presentation of survey results. Consider an adoption study, which examined socioeconomic factors affecting adoption of a new technology. Households are classified as male or female headed, and the level of education and access to credit of the head is recorded.
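A check of association between, say, adoption and gender of household head can be run from first principles with a chi-square test. The counts in the table below are invented for illustration:

```python
# Chi-square test of association for a two-way table, from first
# principles. The adoption-by-gender counts are hypothetical.
observed = {
    "male":   {"adopter": 30, "non-adopter": 20},
    "female": {"adopter": 15, "non-adopter": 35},
}

row_totals = {r: sum(cells.values()) for r, cells in observed.items()}
col_totals = {}
for cells in observed.values():
    for col, v in cells.items():
        col_totals[col] = col_totals.get(col, 0) + v
grand = sum(row_totals.values())

# Chi-square statistic: sum of (O - E)^2 / E over all cells, where
# E is the expected count under no association.
chi2 = 0.0
for r, cells in observed.items():
    for c, o in cells.items():
        e = row_totals[r] * col_totals[c] / grand
        chi2 += (o - e) ** 2 / e

# A 2x2 table has 1 degree of freedom; the 5% critical value is
# 3.84, so a larger chi2 suggests a real association.
print(f"chi-square = {chi2:.2f}, evidence of association: {chi2 > 3.84}")
```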
At its most complicated, the total number of households in the sample would be classified by adoption, gender of household head, level of education and access to credit, resulting in a 4-way table. Now suppose, from chi-square tests, we find no evidence of any relationship between adoption and education or access to credit. In this case the results of the simple two-way table of adoption by gender of household head would probably be appropriate. If, on the other hand, access to credit were the main criterion affecting the chance of adoption, and if this association varied according to the gender of the household head, the simple two-way table of adoption by gender would no longer be appropriate and a three-way table would be necessary. Inferential procedures thus help in deciding whether presentation of results should be in terms of one-way, two-way or higher-dimensional tables.

Chi-square tests are limited to examining association in two-way tables, so have to be used in a piecemeal fashion for more complicated situations like that above. A more general way to examine tabulated data is to use log-linear models, described in Modern Methods of Analysis.

2.5 Multiple Response Data

Surveys often contain questions where respondents can choose a number of relevant responses, e.g.

If you are not using an improved fallow on any of your land, please tick from the list below any reasons that apply to you:
(i) Don’t have any land of my own
(ii) Do not have any suitable crop for an improved fallow
(iii) Cannot afford to buy the seed or plants
(iv) Do not have the time/labour

There are three ways of computerising these data. The simplest is to provide as many columns as there are alternatives. This is called a “multiple dichotomy”, because there is a yes/no (or 1/0) response in each case, indicating that the respondent ticked/did not tick each item in the list.
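In the multiple dichotomy layout, counting mentions of each reason is simply a matter of summing each 1/0 column. A minimal sketch, with invented column names and respondent rows:

```python
# Multiple dichotomy storage: one 1/0 column per alternative.
# Column names and respondent data below are hypothetical.
columns = ["no_land", "no_crop", "no_seed", "no_labour"]
data = [
    {"no_land": 1, "no_crop": 0, "no_seed": 1, "no_labour": 0},
    {"no_land": 0, "no_crop": 0, "no_seed": 0, "no_labour": 1},
    {"no_land": 1, "no_crop": 1, "no_seed": 0, "no_labour": 1},
]

# Number of mentions of each reason = column total. Percentages
# based on respondents can sum to more than 100%, since each person
# may tick several reasons.
mentions = {c: sum(row[c] for row in data) for c in columns}
print(mentions)
```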
The second way is to find the maximum number of ticks from anyone and then have this number of columns, entering the codes for ticked responses, one per column. This is known as “multiple response” data. This is a useful method if the question asks respondents to put the alternatives in order of importance, because the first column can give the most important reason, and so on.

A third method is to have a separate table for the data, with just two columns. The first identifies the person and the second gives their responses. There are as many rows of data as there are reasons. There is no entry for a person who gives no reasons. Thus, in this third method the length of the columns is equal to the number of responses rather than the number of respondents. If there are follow-up questions about each reason, the third method above is the obvious way to organise the data, and readers may identify the general concept as being that of data at another level, i.e. the reason level. More information on organising this type of data is provided in the guide The Role of a Database Package for Research Projects.

Essentially such data are analysed by building up counts of the numbers of mentions of each response. Apart from SPSS, few standard statistics packages have any special facilities for processing multiple response and multiple dichotomy data. Almost any package can be used with a little ingenuity, but working from first principles is a time-consuming business. On our web site we describe how Excel may be used.

2.6 Profiles

Usually the questions as put to respondents in a survey need to represent ‘atomic’ facets of an issue, expressed in concrete terms and simplified as much as possible, so that there is no ambiguity and so they will be consistently interpreted by respondents. Basic cross-tabulations are based on reporting responses to such individual questions and are therefore narrowly issue-specific.
A rather different approach is needed if the researchers’ ambitions include taking an overall view of individuals’, or small groups’, responses as to their livelihood, say. Cross-tabulations of individual questions are not a sensible approach to ‘people-centred’ or ‘holistic’ summary of results. Usually, even when tackling issues a great deal less complicated than livelihoods, the more important research outputs are ‘complex molecules’ which bring together responses from numerous questions to produce higher-level conclusions described in more abstract terms. For example, several questions may each enquire whether the respondent follows a particular recommendation, whereas the output may be concerned with overall ‘compliance’ – the abstract concept behind the questioning.

A profile is a description synthesising responses to a range of questions, perhaps in terms of a set of abstract nouns like compliance. It may describe an individual, a cluster of respondents or an entire population.

One approach to discussing a larger concept is to produce numerous cross-tabulations reflecting actual questions and to synthesise their information content verbally. This tends to lose sight of the ‘profiling’ element: if particular groups of respondents tend to reply to a range of questions in a similar way, this overall grouping will often come out only weakly. If you try to follow the group of individuals who appear together in one corner cell of the first cross-tab, you can’t easily track whether they stay together in a cross-tab of other variables.

Another type of approach may be more constructive: to derive synthetic variables – indicators – which bring together inputs from a range of questions, say into a measure of ‘compliance’, and to analyse those, by cross-tabulation or other methods. See section 2.8 below.
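A synthetic ‘compliance’ variable of this kind is simply an extra column derived from several question columns. A minimal sketch, with invented question names and data:

```python
# Deriving a synthetic 'compliance' variable: one extra column built
# from several yes/no (1/0) questions about following recommendations.
# Question names and respondent data are hypothetical.
respondents = [
    {"id": 1, "follows_spacing": 1, "follows_weeding": 1, "follows_fertiliser": 0},
    {"id": 2, "follows_spacing": 0, "follows_weeding": 0, "follows_fertiliser": 0},
    {"id": 3, "follows_spacing": 1, "follows_weeding": 1, "follows_fertiliser": 1},
]

questions = ["follows_spacing", "follows_weeding", "follows_fertiliser"]

# The new column: proportion of recommendations followed, one value
# per respondent, usable in later analysis like any other column.
for r in respondents:
    r["compliance"] = sum(r[q] for q in questions) / len(questions)

print([round(r["compliance"], 2) for r in respondents])
```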
If we have an analysis dataset with a row for each respondent and a column for each question, the derivation of a synthetic variable just corresponds to adding an extra column to the dataset. This is then used in analysis just like any other column. A profile for an individual will often comprise a set of values of a suite of indicators.

2.7 Looking for Respondent Groups

Profiling is often concerned with acknowledging that respondents are not just a homogeneous mass, and with distinguishing between different groups of respondents. Cluster analysis is a data-driven statistical technique that can draw out – and thence characterise – groups of respondents whose response profiles are similar to one another. The response profiles may serve to differentiate one group from another if they are somewhat distinct. This might be needed if the aim were, say, to define target groups for distinct safety net interventions. The analysis could help clarify the distinguishing features of the groups, their sizes, their distinctness or otherwise, and so on.

Unfortunately there is no guarantee that groupings derived from data alone will make good sense in terms of profiling respondents. Cluster analysis does not characterise the groupings; you have to study each cluster to see what its members have in common. Nor does it prove that they constitute suitable target groups for meaningful development interventions.

Cluster analysis is thus an exploratory technique, which may help to screen a large mass of data, and prompt more thoughtful analysis by raising questions such as:

• Is there any sign that the respondents do fall into clear-cut sub-groups?
• How many groups do there seem to be, and how important are their separations?
• If there are distinct groups, what sorts of responses do “typical” group members give?

2.8 Indicators

Indicators are summary measures. Magazines provide many examples, e.g.
an assessment of personal computers may give a score in numerical form, like 7 out of 10, or a pictorial quality rating on a five-point scale running from ‘Very Good’ through ‘Good’, ‘Moderate’ and ‘Poor’ down to ‘Very Poor’.

This review of computers may give scores – indicators – for each of several characteristics, where the maximum score for each characteristic reflects its importance, e.g. for one model: build quality (7/10), screen quality (8/20), processor speed (18/30), hard disk capacity (17/20) and software provided (10/20). The maximum score over all characteristics in the summary indicator is in this case (10 + 20 + 30 + 20 + 20) = 100, so the total score for each computer is a percentage, e.g. above (7 + 8 + 18 + 17 + 10) = 60%.

The popularity of such summaries demonstrates that readers find them accessible, convenient and to a degree useful. This is either because there is little time to absorb detailed information, or because the indicators provide a baseline from which to weigh up the finer points. Many disciplines of course are awash with suggested indicators, from simple averages to housing quality measures, social capital assessment tools, or quality-adjusted years of life.

Of course new indicators should be developed only if others do not exist or are unsatisfactory. Well-understood, well-validated indicators, relevant to the situation in hand, are quicker and more cost-effective to use. Defining an economical set of meaningful indicators before data collection ought ideally to imply that at analysis, their calculation follows a pre-defined path, and the values are readily interpreted and used.

Is it legitimate to create new indicators after data collection and during analysis? This is to be expected in genuine ‘research’, where fieldwork approaches allow new ideas to come forward, e.g. if new lines of questioning have been used, or if survey findings take the researchers into areas not well covered by existing indicators.
A study relatively early on in a research cycle, e.g. a baseline survey, can fall into this category. Usually this means the available time and data are not quite what one would desire in order to ensure well-understood, well-validated indicators emerge in final form from the analysis. Since the problem does arise, how does the analyst best face up to it?

It is important not to create unnecessary confusion. An indicator should synthesise information and serve to represent a reasonable measure of some issue or concept. The concept should have an agreed name so that users can discuss it meaningfully, e.g. ‘compliance’ or ‘vulnerability to flooding’. A specific meaning is attached to the name, so it is important to realise that the jargon thus created needs careful explanation to ‘outsiders’. Consultation or brainstorming leading to a consensus is often desirable when new indicators are created. Indicators created ‘on the fly’ by analysts as the work is rushed to a conclusion are prone to suffer from their hasty introduction, then to lead to misinterpretation, often over-interpretation, by enthusiastic would-be users. It is all too easy for a little information about a small part of the issue to be taken as ‘the’ answer to ‘the problem’!

As far as possible, creating indicators during analysis should follow the same lines as when the process is done a priori, i.e.:

(i) deciding on the facets which need to be included to give a good feel for the concept;
(ii) tying these to the questions or observations needed to measure these facets;
(iii) ensuring balanced coverage, so that the right input comes from each facet;
(iv) working out how to combine the information gathered into a synthesis which everyone agrees is sensible.

These are all parts of ensuring face (or content) validity as in the next section. Usually this should be done in a simple enough way that the user community are all comfortable with the definitions of what is measured.
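The combination step (iv) can be sketched with the computer-review scores from earlier in this section, where each facet's maximum reflects its agreed importance:

```python
# Combining facet scores into a single indicator, with maxima that
# reflect each facet's importance. The facet names and scores follow
# the computer-review example in the text.
facets = {  # facet: (score, maximum possible)
    "build quality":      (7, 10),
    "screen quality":     (8, 20),
    "processor speed":    (18, 30),
    "hard disk capacity": (17, 20),
    "software provided":  (10, 20),
}

total = sum(score for score, _ in facets.values())
maximum = sum(mx for _, mx in facets.values())
indicator = 100 * total / maximum   # synthesis as a percentage

print(f"{total}/{maximum} = {indicator:.0f}%")
```

The maxima act as weights, so changing them is exactly the balanced-coverage decision in step (iii) and should be agreed with the user community, not set silently by the analyst.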
There is some advantage in creating indicators when datasets are already available. You can look at how well the indicators serve to describe the relevant issues and groups, and select the most effective ones. Some analysts rely too much on data reduction techniques such as factor analysis or cluster analysis as a substitute for thinking hard about the issues. We argue that an intellectual process of indicator development should build on, or dispense with, more data-driven approaches. Principal component analysis is data-driven, but readily provides weighted averages. These should be seen as no more than a foundation for useful forms of indicator.

2.9 Validity

The basic question behind the concept of validity is whether an indicator measures what we say or believe it does. This may be quite a basic question if the subject matter of the indicator is visible and readily understood, but the practicalities can be more complex in mundane, but sensitive, areas such as measurement of household income. Where we consider issues such as the value attached to indigenous knowledge, the question can become very complex. Numerous variations on the validity theme are discussed extensively in the social science research methodology literature.

Validity takes us into issues of what different people understand words to mean, during the development of the indicator and its use. It is good practice to try a variety of approaches with a wide range of relevant people, and carefully compare the interpretations, behaviours and attitudes revealed, to make sure there are no major discrepancies of understanding. The processes of comparison and reflection, then the redevelopment of definitions, approaches and research instruments, may all be encompassed in what is sometimes called triangulation – using the results of different approaches to synthesise robust, clear and easily interpreted results.
Survey instrument or indicator validity is a discussion topic, not a statistical measure, but two themes with which statistical survey analysts regularly need to engage are the following.

Content (or face) validity looks at the extent to which the questions in a survey, and the weights the results are given in a set of indicators, serve to cover in a balanced way the important facets of the notion the indicator is supposed to represent.

Criterion validity looks at how the observed values of the indicator tie up with something readily measurable that they should relate to. Its aim is to validate a new indicator by reference to something better established, e.g. to validate a prediction retrospectively against the actual outcome. If we measure an indicator of ‘intention to participate’ or ‘likelihood of participating’ beforehand, then for the same individuals later ascertain whether they did participate, we can check the accuracy of the stated intentions, and hence the degree of reliance that can in future be placed on the indicator. As a statistical exercise, criterion validation has to be done through sensible analyses of good-quality data. If the reason for developing the indicator is that there is no satisfactory way of establishing a criterion measure, criterion validity is not a sensible approach.

2.10 Summary

In this guide we have outlined general features of survey analysis that have wide application to data collected from many sources and with a range of different objectives. Many readers of this guide should be able to use its suggestions unaided. We have pointed out ideas and methods which do not in any way depend on the analyst knowing modern or complicated statistical methods, or having access to specialised or expensive computing resources. The emphasis has been on the importance of preparing the appropriate tables to summarise the information.
This is not to belittle the importance of graphical display, but that comes at the presentation stage, and the tables provide the information for the graphs. Often key tables will be in the text, with larger, less important tables in appendices. Often a pilot study will have indicated the most important tables to be produced initially. What then takes time is to decide on exactly the right tables. There are three main issues. The first is to decide on what is to be tabulated, and we have considered tables involving either individual questions or indicators. The second is the complexity of table that is required – one-way, two-way or higher. The final issue is the numbers that will be presented. Often they will be percentages, but deciding on the most informative base, i.e. what constitutes 100%, is also important.

2.11 Next Steps

We have mentioned the role of more sophisticated methods. Cluster analysis may be useful to indicate groups of respondents, and principal components to identify data-driven indicators. Examples of both methods are in our Modern Methods of Analysis guide, where we emphasise, as here, that their role is usually exploratory. When used, they should normally be at the start of the analysis, and are primarily to assist the researcher rather than to provide presentations for the reader.

Inferential methods are also described in the Modern Methods guide. For surveys, they cannot be as simple as in most courses on statistics, because the data are usually at multiple levels and with unequal numbers at each subdivision of the data. The most important methods are log-linear and logistic models and the newer multilevel modelling. These methods can support the analysts’ decisions on the complexity of tables to produce. Both the more complex methods and those in this guide are equally applicable to cross-sectional surveys, such as baseline studies, and longitudinal surveys. The latter are often needed for impact assessment.
Details of the design and analysis of baseline surveys and those specifically for impact assessment must await another guide!

The Statistical Services Centre is attached to the Department of Applied Statistics at The University of Reading, UK, and undertakes training and consultancy work on a non-profit-making basis for clients outside the University. These statistical guides were originally written as part of a contract with DFID to give guidance to research and support staff working on DFID Natural Resources projects. The available titles are listed below.

• Statistical Guidelines for Natural Resources Projects
• On-Farm Trials – Some Biometric Guidelines
• Data Management Guidelines for Experimental Projects
• Guidelines for Planning Effective Surveys
• Project Data Archiving – Lessons from a Case Study
• Informative Presentation of Tables, Graphs and Statistics
• Concepts Underlying the Design of Experiments
• One Animal per Farm?
• Disciplined Use of Spreadsheets for Data Entry
• The Role of a Database Package for Research Projects
• Excel for Statistics: Tips and Warnings
• The Statistical Background to ANOVA
• Moving on from MSTAT (to Genstat)
• Some Basic Ideas of Sampling
• Modern Methods of Analysis
• Confidence & Significance: Key Concepts of Inferential Statistics
• Modern Approaches to the Analysis of Experimental Data
• Approaches to the Analysis of Survey Data
• Mixed Models and Multilevel Data Structures in Agriculture

The guides are available in both printed and computer-readable form. For copies or for further information about the SSC, please use the contact details given below.

Statistical Services Centre, The University of Reading
P.O. Box 240, Reading, RG6 6FN, United Kingdom
tel: SSC Administration +44 118 931 8025
fax: +44 118 975 3169
e-mail: [emailprotected]
web: http://www.reading.ac.uk/ssc/