Tuesday, August 25, 2020

Free Essays on Violence In Media

I believe that the media have little to do with the rise in violence. Too many people try to put the blame for their actions on someone or something else. I feel that there are several steps that should be taken to eliminate violence. Factors that are to blame for violence are parents' roles in their children's lives and moral responsibility. Consequences should be more effectively carried out. Alternatives to violence should be encouraged and practiced on a greater scale.

I think that the process of discouraging violence should begin at home. Parents should raise their children with the values and morals to act responsibly and take the blame for their own actions. Parents should act in a manner that reflects this philosophy. Most people can control their actions and refrain from violent acts. If parents encourage better behavior, then children will carry this behavior throughout their adult lives.

When people see violence in the media, they are often shown that there are no consequences of violence. People know the difference between reality and fantasy. In reality, there are consequences; however, they are not always as severe as they could be. People should be aware of these consequences, and they should take responsibility for their own actions.

My next point is that if consequences were enforced more often, this would discourage people from resorting to violence. When people see that others are being punished for their negative actions, they will see that crime does not pay. This will keep more people from committing these crimes and violent acts.

People are entertained by violence. Media show what audiences are interested in. I think that there is no harm in this. This is where the concept of reality and fantasy becomes involved.
People know that when they are watching these programs, they are fake or fantasy. People watch these programs to escape the real world and retreat to a wo...

Saturday, August 22, 2020

the use of fetal tissue in res essays

The Use of Fetal Tissue in Research and Transplants

Fetal tissue research is done to provide information to society that will eventually lead to the prevention of certain diseases and hopefully one day help to find a cure for some incurable diseases. Right now researchers are doing fetal tissue research to obtain information in the areas of fetal tissue transplantation, the development and production of new vaccines, and lastly information on various birth defects and how to prevent or cure them. Fetal tissue research has added to the nation's knowledge of various diseases, new vaccines, and a potential treatment to help cure some incurable diseases; however, many ethical and legal concerns arise. Each area of fetal research is done on three different types of fetuses: the live, nonviable aborted fetus; the fetus in utero; and the dead fetus. The first type of research is done on the live, nonviable, aborted fetus. This type of research is done to study the length of time a fetus can be kept alive after an abortion in order to obtain cells or organs for transplant (Levy 44). A second type of fetus that researchers experiment on is the fetus in utero. This type of research is done by amniocentesis. Amniocentesis is the insertion of a needle through the abdominal wall and into the amniotic sac, where it withdraws amniotic fluid to be tested. This type of research involves no direct contact with the fetus; however, it is dangerous because it carries the possibility of puncturing a delicate organ. The process of amniocentesis gives doctors and researchers information that identifies sex-linked diseases and genetic disorders. Another way experimenters conduct tests on the fetus in utero is to give the mother specific drugs or treatment and then observe the effects on the aborted fetus.
This method of researching the fetus in utero involves direct contact with the fetus (Levy 44). The third type of fetus u...

Friday, August 7, 2020

Is Casual Dating Good for Relationships

By Anabelle Bernard Fournier. Updated on January 31, 2020.

Relationship scientists define casual dating as dating and sexual behavior outside of a long-term romantic relationship, and it is a common arrangement among teenagers and young adults. In other words, casual dating is dating someone and possibly having sex with them when you are not engaged, married, or otherwise in a long-term commitment. Casual dating is not the same as hooking up, even though the two have many things in common. Casual dating implies a desire to maintain a relationship, even though it is deemed casual. Hooking up, on the other hand, does not necessarily demand an emotional commitment on any level. Depending on your age and particular upbringing, you might consider casual dating a fun way to socialize, a stepping stone towards a more long-term relationship, or an immoral relationship because of its extramarital sex component (if sex is occurring). Many proponents of traditional marriage denounce casual dating as harmful and a precursor of divorce. Is it true that casual dating is harmful in the long term?

Casual Dating and Divorce

Relationship psychologists and sociologists have long believed that casual dating and cohabitation before marriage lead to higher divorce rates. However, the connection is difficult to establish on its own (there are lots of possible confounding factors), and there are many studies that show the opposite trend. How you ask questions, and of whom you ask them, deeply influences the type of results you get on this topic. If you ask happy couples in both casual and married relationships, both will show similar patterns in satisfaction and happiness. The same goes for unhappy couples.
In other words, evidence that shows couples as less happy and more likely to divorce could be a result of the specific couple and not the relationship style. Casual dating may or may not lead to higher divorce rates in the future, depending on the person you are dating and the likelihood of a long-term relationship. Scientists can't quite agree.

Are Casual Relationships Less Satisfying?

Another common effect attributed to casual dating is that these non-committed, casual relationships are less satisfying than more traditional, committed relationships. On the side of sexual satisfaction, a study published in the Canadian Journal of Human Sexuality found that although sexual satisfaction was higher for people in married, engaged, or exclusive relationships, there was still a positive link between casual dating and sexual satisfaction. Casual dating doesn't lead to an unhappy sex life. What about general satisfaction with the relationship as a whole? The picture gets a little more complicated here. If you don't expect a future with the person you are dating, your relationship satisfaction will be lower than that of people in cohabiting, engaged, or married relationships. If you do hope that your casual dating relationship will turn into something more long-term, then your satisfaction will be the same as that of cohabiting or married couples. It all depends on whether you feel the relationship is coming to an end or is in danger. Overall, if your expectations and attitudes towards casual dating are positive, it's likely that you'll be happy with your relationship and your sex life.

Does Casual Dating Lead to Poor Mental Health?

Some people also believe that casual dating leads to negative psychological effects such as low self-esteem, anxiety, and depression. Myths about the negative effects of casual dating and hooking up, especially for women, abound. What does the science say?
On the topic of hooking up, a year-long study of undergraduate students in the United States showed that only people who hooked up for non-autonomous ("I didn't choose this") reasons had lower self-esteem, higher depression and anxiety, and more physical symptoms. In other words, when a person hooked up because of peer pressure, or because they couldn't consent (being under the influence of drugs or alcohol), it made them less happy. However, the participants who hooked up because they wanted to (autonomously) were just as happy as the students who didn't hook up at all. Whether hooking up and casual dating hurt people mentally depends on their own personal desires and attitudes towards these relationship styles. If you think that hooking up and casual dating are wrong, engaging in these things will make you feel bad. If you think that they are fun ways to meet people and explore future relationships, you will feel happy. It all depends on your point of view.

Casual Dating in Relationship Progression

If you don't think that casual dating is wrong or immoral, then you are likely to find this kind of relationship satisfying. More interestingly, researchers have begun considering casual dating as a step in a progressive relationship that eventually leads to long-term commitment or marriage. In a world where traditional marriage is retreating, people use casual dating as a way to test sexual and relationship compatibility with partners. In other words, casual dating tends to be an early step toward long-term partnerships. These relationships often begin with a meeting or even hooking up. The two people may start going on dates, perhaps not exclusively at first. If there is compatibility, people then tend to become exclusive, move in together, and eventually marry and have children. The difference between today's casual dating and the dating styles of previous generations is that now, casual dating more openly involves extramarital sex.
This may be why older, more conservative groups tend to denounce casual dating as undesirable. However, since non-marital, casual sex is widely accepted in modern societies, this attitude is less influential than it used to be.

A Word From Verywell

Casual dating will hurt you only if you are doing it against your will, if you have no hope for a future with the person, or if you think it is immoral. If you enjoy the sense of freedom that comes with developing a relationship with a potential partner and testing the waters before making a commitment, casual dating is one step towards finding a person with whom to possibly form a long-term commitment in the future.

Saturday, May 23, 2020

Case: Governance and Sustainability at Nike

POM 642/442 Case: Governance and Sustainability at Nike

This case was about the struggles with sustainability within Nike as well as the fashion industry. Greenpeace called out Nike, Adidas, Puma, and several other fashion brands for pollution resulting from the manufacturing of their products. Several chemicals are used in the process of manufacturing clothing and shoes, and several more are used in the dyeing process. Although these facilities are third parties and the locations are not owned by Nike, it has fallen on Nike and its competitors to reduce pollution within the manufacturing process. At Nike the key decision makers are Hannah Jones and Eric Sprunk, who have to present their findings… This is dangerous for all of those who live in and travel to these communities. Nike, along with the other companies, has a corporate responsibility to ensure that its manufacturing process is safe and not hazardous to unsuspecting citizens. Failure to make a change could harm people and ultimately result in massive lawsuits and large amounts of negative publicity, which could reduce Nike's and the others' profitability, also affecting the stockholders. As a manager I would look to set up a partnership with the other manufacturers and pose it as a greater good for the world; companies can unite to bring about great change. I would want to work with these other companies, find out what they have come up with, and work together to reach zero waste. Puma has agreed to the same goal; if they have not walked back what they told Greenpeace, they must be doing something right. Working together would reduce the duplication of efforts, help make the investment dollars go further, and help make the world a cleaner place. As a manager I would also seek to work with Greenpeace; having them as a partner would help to provide new ideas and to show the effort being put forth to reach this goal.
Working with Greenpeace can only help in this situation. I would want to make every effort to try and reach that goal before going to Greenpeace stating that this goal was too ambitious, because that would result in worldwide protests giving more negative publicity to the company.

Tuesday, May 12, 2020

Miami University Acceptance Rate, SAT/ACT Scores, GPA

Miami University is a public research university with an acceptance rate of 75%. Located in Oxford, Ohio and founded in 1809, Miami University is one of the oldest universities in the country. For its strengths in the liberal arts and sciences, Miami University was awarded a chapter of the prestigious Phi Beta Kappa honor society. The university also appears among the top Ohio colleges and the top Midwest colleges. In athletics, the Miami University RedHawks compete in the NCAA Division I Mid-American Conference (MAC). Considering applying to Miami University? Here are the admissions statistics you should know, including average SAT/ACT scores and GPAs of admitted students.

Acceptance Rate

During the 2017-18 admissions cycle, Miami University had an acceptance rate of 75%. This means that for every 100 students who applied, 75 were admitted, making Miami University's admissions process somewhat competitive.

Admissions Statistics (2017-18)
Number of Applicants: 30,126
Percent Admitted: 75%
Percent Admitted Who Enrolled (Yield): 17%

SAT Scores and Requirements

Miami University requires that all applicants submit either SAT or ACT scores. During the 2017-18 admissions cycle, 28% of admitted students submitted SAT scores.

SAT Range (Admitted Students), 25th-75th percentile
ERW (Evidence-Based Reading and Writing): 590-670
Math: 610-710

This admissions data tells us that most of Miami University's admitted students fall within the top 20% nationally on the SAT. For the evidence-based reading and writing section, 50% of students admitted to Miami University scored between 590 and 670, while 25% scored below 590 and 25% scored above 670. On the math section, 50% of admitted students scored between 610 and 710, while 25% scored below 610 and 25% scored above 710. Applicants with a composite SAT score of 1380 or higher will have particularly competitive chances at Miami University.
Requirements: Miami University does not require the SAT writing section or SAT Subject Tests. Note that Miami University participates in the Score Choice program, which means that the admissions office will consider your highest score from each individual section across all SAT test dates.

ACT Scores and Requirements

Miami University requires that all applicants submit either SAT or ACT scores. During the 2017-18 admissions cycle, 82% of admitted students submitted ACT scores.

ACT Range (Admitted Students), 25th-75th percentile
English: 25-33
Math: 25-30
Composite: 26-31

This admissions data tells us that most of Miami University's admitted students fall within the top 18% nationally on the ACT. The middle 50% of students admitted to Miami University received a composite ACT score between 26 and 31, while 25% scored above 31 and 25% scored below 26.

Requirements: Miami University does not require the ACT writing section. Unlike many universities, Miami University superscores ACT results; your highest subscores from multiple ACT sittings will be considered.

GPA

In 2018, the middle 50% of Miami University's incoming class had high school GPAs between 3.59 and 4.18. 25% had a GPA above 4.18, and 25% had a GPA below 3.59. These results suggest that most successful applicants to Miami University have primarily A and B grades.

Figure: Miami University applicants' self-reported GPA/SAT/ACT graph (unweighted GPAs; data courtesy of Cappex).

Admissions Chances

Although Miami University in Oxford, Ohio accepts three-quarters of applicants, most successful applicants have grades and test scores that are above average.
However, Miami University has a holistic admissions process involving other factors beyond your grades and test scores. A strong application essay and glowing letters of recommendation can strengthen your application, as can participation in meaningful extracurricular activities, work experience, and a rigorous course schedule. At Miami University, legacy status can also play a role in the admissions process. In the graph above, the blue and green dots represent accepted students. As you can see, students who got in tended to have high school averages of B or higher (A or A- is more common), ACT composite scores of 23 or higher, and SAT scores of 1100 or higher (ERW+M). Higher test scores and grades obviously improve your chances of getting an acceptance letter, and almost all students with A averages and above-average ACT scores were admitted. All admissions data has been sourced from the National Center for Education Statistics and the Miami University Undergraduate Admissions Office.
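The ACT superscoring policy described above works section by section: the best score per section across sittings is kept, and the composite is the rounded average of the four section scores. A minimal sketch, assuming hypothetical sittings (note that Python's round uses round-half-to-even, whereas the ACT rounds halves up, so the example avoids a .5 average):

```python
# Sketch of ACT superscoring: best score per section across sittings,
# then the composite as the rounded average of the four best scores.

def superscore(sittings):
    sections = sittings[0].keys()
    best = {s: max(sitting[s] for sitting in sittings) for s in sections}
    composite = round(sum(best.values()) / len(best))
    return best, composite

sittings = [
    {"English": 25, "Math": 30, "Reading": 27, "Science": 26},
    {"English": 29, "Math": 28, "Reading": 30, "Science": 27},
]
best, composite = superscore(sittings)
print(best)       # {'English': 29, 'Math': 30, 'Reading': 30, 'Science': 27}
print(composite)  # 29
```

Here the superscored composite (29) beats either single sitting's average, which is exactly why the policy favors applicants who test more than once.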

Wednesday, May 6, 2020

Transcendentalism Free Essays

Jess / Ms. K / Accelerated English 10A / 26 November 2012

Transcendentalism Final Paper

Eras pass, cultural views die out, and society evolves. While this occurs, we still have transcendental views, which are from the mid-1800s, in society whether we realize it or not. Transcendentalism is a group of ideas in literature and philosophy developed in the 1830s and 1840s. It protested against the general state of culture and society. The idea was that spiritual reality transcends the scientific and is knowable through intuition. Transcendentalists were idealistic, optimistic, and believed people already had everything they needed in life. In our culture today, transcendental views are still expressed through media like the song "Live Like We're Dying" by Kris Allen, the movie "My Sister's Keeper," and the song "You're Only Human" by Billy Joel.

"Live Like We're Dying" by Kris Allen is one example of transcendentalism in media today. This song fits transcendentalism because it is all about living every day like it is our last, so we are sure to live life to its fullest. "Live Like We're Dying" is a rally for people to not waste a minute of their lives and to be productive. Allen writes about how people only have so much time in their lives because they don't live forever; as humans, who have a limited amount of time, it is necessary to use every second of it without regrets. Allen sings, "So if your life flashed before you, what would you wish you would've done… Looking at the hands of time we've been given here / This is all we got and we gotta start thinkin' it / Every second counts on a clock that's tickin' / Gotta live like we're dying" (Allen). This supports Allen's cry to society that we can all use our time wisely and live like we're dying (Allen).
In the short story "Rip Van Winkle" by Washington Irving, the main character, Rip, avoids his life and ends up sleeping away twenty years. Irving writes, "For Rip Van Winkle was old and gray, and twenty summers had passed away -- Yes, twenty winters of snow and frost -- Had he in his mountain slumber lost…" (Irving). This shows what can happen when we waste our lives: we end up unfulfilled and unsatisfied. An aphorism, or truth about life, written by Ralph Waldo Emerson, "insist on yourself; never imitate" (Emerson), is a prime example of being ourselves and choosing what we want to do, so we will have fulfillment in our future.

The movie "My Sister's Keeper" also has transcendental influences. In the movie, Anna is brought into the world to be a genetic match for her sister, Kate, who is suffering from acute promyelocytic leukemia, a cancer of the blood and bone marrow. When asked to donate one of her kidneys, Anna refuses and sues her parents for medical emancipation. Before the case is closed, we find out Kate asked Anna to sue their parents; also, Kate gets a burst of energy and, trying to live her life while she can, goes to the beach with her family. She dies before the case is decided; we later learn Anna won. The film ends with the family at their new annual vacation spot, Montana, Kate's favorite place in the world. In going to the beach, Kate enjoyed her life while she still could; she lived deliberately. In "Walden," Henry David Thoreau tries to live intentionally, "to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived" (Thoreau). Both Kate and Thoreau wanted to live their lives by their own choices and spend time while they still had time to spend. A final piece of media influenced by transcendentalism is the song "You're Only Human" by Billy Joel.
This song portrays transcendentalism because it says we are only human, so there will be bumps in the road of our lives, but to stay positive and optimistic. Similarly, the song reflects characteristics of both Thoreau's and Emerson's writing. In "Walden," Thoreau writes, "I have several more lives to live, and could not spare any more time for that one" (Thoreau). This is like the song because it is about moving forward no matter what is in the past, like a "second wind" (Joel). Also, an aphorism by Emerson says, "life only avails, not having lived" (Emerson). This means that everyone should live their own life, for people are only truly happy when they make decisions with confidence rather than fear. These all relate because they describe living life rather than running from it. "Live Like We're Dying," "My Sister's Keeper," and "You're Only Human" all provide examples of transcendentalism in our media today.

Some people say transcendentalists were the first hippies; they're right. Transcendentalists were something of their own, much like hippies. Coming out of a time period with strict views on society, transcendentalists were extremely iconoclastic. They did not conform to ordinary rules. Transcendentalists like Thoreau believed in civil disobedience, a concept foreign to traditional people. These are all important ideas to carry over into our modern era, for they help the individual in ways other things cannot.

Friday, May 1, 2020

Investment Evaluation of Renewal or Replacement of a Machine

Question: Discuss the investment evaluation of renewal or replacement of a machine.

Answer:

Introduction

Lifestyle Furniture is a leading online retailer of solid hardwood furniture. The company provides a large variety of furniture ranges and wishes to improve its production line by investing in new-generation craft machinery. The company is in a dilemma whether to renew the existing machinery or replace it with a new one. Relevant cash flows have been developed for both options, and the respective NPV, IRR and PI have been calculated to decide which option is better in terms of profitability and timely return of investment. An NPV profile has been developed for both options to see which has the better profile. The incremental cash flows have then been adjusted for the effect of inflation. A base case scenario has been performed for both alternatives, where the company's expected profits are taken as the base case; depending on these profits, the sales and the operating and maintenance costs have been adjusted to see the changes in NPV and the other capital budgeting measures.

Investment Evaluation

Incremental Operating Cash Flows

The incremental operating cash flows have been prepared for both alternatives by deducting all operating outflows: raw material, overhead, operating costs, advertising and marketing expenses, depreciation, and interest on the bank loan. The net income has been calculated after deducting taxes, and the operating cash flows have been obtained by adding depreciation back to net income. The net cash flows have been discounted at the cost of capital. The initial investment for alternative 1 comprises the cost of renewal, the opportunity cost of not selling the machine, and the increase in working capital. The initial investment for alternative 2 comprises the cost of the new machine, less the after-tax proceeds from the sale of the old machine, plus the increase in working capital.
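The cash-flow build-up described above can be sketched as follows. All figures are hypothetical placeholders (not the case's appendix values), and a flat tax rate is a simplifying assumption:

```python
# Sketch: operating cash flow = after-tax net income + depreciation added back,
# and the initial investment for a renewal-style alternative.

def operating_cash_flow(revenue, operating_costs, depreciation, interest, tax_rate):
    # Earnings before tax after deducting all operating outflows
    ebt = revenue - operating_costs - depreciation - interest
    net_income = ebt * (1 - tax_rate)
    # Depreciation is a non-cash charge, so it is added back
    return net_income + depreciation

def initial_investment(renewal_cost, opportunity_cost, working_capital_increase):
    # Renewal cost + forgone sale proceeds of the old machine + extra working capital
    return renewal_cost + opportunity_cost + working_capital_increase

print(operating_cash_flow(100_000, 40_000, 10_000, 5_000, 0.30))  # 41500.0
print(initial_investment(50_000, 8_000, 7_000))                   # 65000
```

For the replacement alternative, the after-tax sale proceeds of the old machine would be subtracted from, rather than added to, the initial outlay.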
There is a terminal cash flow at the end of the project which includes all the cash flows on liquidation. The table of incremental cash flows is given in the appendices.

NPV, IRR and PI

        Alternative 1    Alternative 2
NPV     $260,512         $632,774
IRR     54%              82%
PI      2.5              3.5

Based on the above incremental cash flow analysis, the company should opt for alternative 2 as it has the higher NPV. A project with a positive NPV is acceptable, and the higher the NPV the better. Likewise, a project is acceptable if its IRR exceeds the required rate of return and its profitability index is more than 1. Both projects pass these tests, but since alternative 2 has the higher NPV, IRR and PI, the company should replace the old machine with a new one.

NPV and IRR Profile

An NPV profile is the graphical representation of NPV at different required rates of return. The graph in the appendices presents the NPV profile for both alternatives at the given required rates of return. From it we see that alternative 2 has a better NPV at all levels of the required rate of return; its NPV line lies above alternative 1's throughout. Hence there is no conflict in ranking between NPV and IRR: at every discount rate alternative 2 has the better NPV and is preferred.

Impact of Inflation

Due to inflation of 3.5% p.a., there is an increase in the operating and maintenance costs, the advertising and marketing costs and the overhead costs. Because of this increase, the incremental operating net cash flows for both alternatives will fall, and with them the NPV, IRR and PI of both alternatives. The new incremental operating cash flows and capital budgeting measures are given in the appendices.
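The three appraisal measures compared above can be sketched as follows. The cash flows used here are illustrative placeholders, not the report's actual incremental flows:

```python
# Minimal sketch of NPV, IRR and PI on a hypothetical cash-flow series.

def npv(rate, cashflows):
    """Present value of all flows; cashflows[0] is the t=0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=10.0, tol=1e-7):
    """Bisection search for the discount rate at which NPV equals zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid            # NPV still positive: true IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

def profitability_index(rate, cashflows):
    """PV of the future inflows divided by the initial outlay."""
    return (npv(rate, cashflows) - cashflows[0]) / -cashflows[0]

cashflows = [-100_000, 40_000, 40_000, 40_000, 40_000, 40_000]  # hypothetical
rate = 0.10
print(f"NPV {npv(rate, cashflows):,.0f}  "
      f"IRR {irr(cashflows):.1%}  PI {profitability_index(rate, cashflows):.2f}")
```

These are exactly the acceptance criteria applied in the text: NPV above zero, IRR above the required rate, PI above 1, and, between two mutually exclusive alternatives, the higher NPV wins.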
The change in NPV, IRR and PI is presented in the table below:

        Alternative 1                      Alternative 2
        Before inflation  After inflation  Before inflation  After inflation
NPV     $260,512          $222,359         $632,774          $596,510
IRR     54%               48%              82%               78%
PI      2.5               2.3              3.5               3.3

Base-Case Scenario

The base case here is the expected profit of the company for the next five years: $130,000 in the first year, with a 2.9% increase every year. The base-case scenario does not apply to alternative 1 because the total net income for that project is $627,842, whereas the total expected profits are $688,809; since the base-case profits exceed the project's profits, the scenario does not apply. For alternative 2, the sales and the operating and maintenance costs have been altered to make the project just feasible for the company. The changes are presented in the table below:

                    Sales                 Operating and maintenance costs
% change            Decrease by 18.24%    Increase by 167%
Total net profit    $688,797              $688,815
NPV                 $283,282              $269,184
IRR                 47%                   44%
PI                  2.1                   2.1

From the above table we see that sales had to fall by about 18% to bring the project to the point of being just feasible, which reduced the NPV, IRR and PI. Even so, the project remains feasible because the NPV is positive, the IRR exceeds the required rate of return and the PI is above 1. The operating and maintenance costs, by contrast, had to rise by 167% to make the project only just feasible, which indicates that this cost is small relative to the other costs.

Leasing Decision

A lease can be either an operating lease or a finance lease. Under an operating lease, ownership of the asset is never transferred to the lessee, whereas under a finance lease ownership is transferred at the end of the lease term. Leasing is advantageous in terms of tax savings: lease rentals are tax deductible, as is depreciation.
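Returning to the Impact of Inflation figures above: the adjustment works by escalating each affected cost line at 3.5% per year before re-discounting, which is what shrinks the net cash flows and hence the NPV, IRR and PI. A minimal sketch with hypothetical base-year amounts (not the report's data):

```python
# Cost lines escalated at 3.5% p.a., as in the inflation adjustment above.
# Base-year amounts are hypothetical placeholders.

INFLATION = 0.035

def inflated(cost, year):
    """Cost escalated to a given project year (year 1 = base year)."""
    return cost * (1 + INFLATION) ** (year - 1)

base_costs = {"operating and maintenance": 80_000,
              "advertising and marketing": 30_000,
              "overhead": 60_000}

for year in (1, 3, 5):
    total = sum(inflated(c, year) for c in base_costs.values())
    print(f"year {year}: total affected costs {total:,.0f}")
```

Subtracting these growing costs from unchanged revenues reproduces the pattern in the table: every measure falls, but the ranking of the two alternatives is unchanged.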
Since ownership rights do not pass to the company, it is not advisable to lease: the tax saving from depreciation is available on purchase, and purchasing the asset increases the company's asset base. An operating lease is short term, so for a project of five years' operation the lease would have to be renewed, which is expensive and time-consuming. An operating lease is treated as an expense, whereas a finance lease is treated as a liability, and an increased liability affects the company's credibility when acquiring a loan; hence leasing is not advisable. Moreover, Lifestyle Furniture is a profit-making company and should use its earnings to purchase the asset rather than opting for a lease.

WACC

The weighted average cost of capital has been calculated using both book value and market value weights:

                                   Book value                        Market value
Source of capital         Cost     Value       Weight   Weighted     Value       Weight   Weighted
Long-term debt            6%       4,000,000   0.784    0.032        3,840,000   0.55     0.02
Preference share capital  13%      40,000      0.007    0.001        60,000      0.008    0.001
Ordinary share equity     17%      1,060,000   0.207    0.035        3,000,000   0.434    0.073
Total                              5,100,000            6.9%         6,900,000            9.8%

The WACC is higher under market value weights. This is because the cost of equity is the highest of the three sources, and the market value weight of equity is larger than its book value weight, so the high weighted cost of equity raises the overall WACC. The preference share capital has hardly any effect on the WACC.

Conclusion/Recommendation

On the basis of the above analysis, it is clear that the company should opt for alternative 2, replacing the old machine with the new one, because the proposal has the higher NPV, IRR and PI. A further key reason is that alternative 1, renewing the existing machine, does not satisfy the company's base-case scenario of the minimum profits it expects over the next five years.
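The WACC mechanics can be sketched from the capital-structure figures quoted above. Note one caveat: the report's 6.9% and 9.8% totals appear to apply an after-tax cost of debt whose tax rate is not stated, so this pre-tax sketch will not reproduce them exactly; it shows the weighting mechanics and the book-versus-market ranking:

```python
# WACC as a value-weighted average of component costs. Uses the source's
# values and rates, but with the pre-tax cost of debt (see caveat above).

def wacc(components):
    """components: (value, cost) pairs; each weight is the value's share of the total."""
    total = sum(value for value, _ in components)
    return sum((value / total) * cost for value, cost in components)

# (value, cost): long-term debt 6%, preference shares 13%, ordinary equity 17%
book   = [(4_000_000, 0.06), (40_000, 0.13), (1_060_000, 0.17)]
market = [(3_840_000, 0.06), (60_000, 0.13), (3_000_000, 0.17)]

print(f"book-value WACC:   {wacc(book):.1%}")
print(f"market-value WACC: {wacc(market):.1%}")
```

Whichever debt-cost convention is used, the market-value WACC comes out higher, matching the report's conclusion: equity is the dearest source and carries a much larger weight at market values.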
Its total profits are lower than the expected profits, whereas alternative 2 will generate more than the firm's total expected profits. Hence, the company should go ahead with replacing the old machine with a new one.

Appendices

Incremental Cash Flows
NPV and IRR Profile
Impact of Inflation
Base-Case Scenario

Sunday, March 22, 2020

Why I want to be a Nurse free essay sample

Choosing the career of my future was one of the toughest decisions I had to make. During my underclassman years, while many of my peers had already chosen the career path they wanted to pursue, I was still struggling to make a decision. By the end of junior year, with no clear career goal in mind, I was really starting to panic. It was not until I started watching a certain YouTuber's vlog that I became interested in nursing. This YouTuber had recently been diagnosed with cancer, and in his vlogs he talked about how the staff at the hospital, especially the nurses, were extremely caring and helped him through that difficult time in his life. There were several times when he would get emotional thanking the nurses for all they had done. I realized that I wanted to be able to help someone in the way the nurses had helped him. I have always dreamed of a career which I would truly enjoy for the rest of my life, and I believe nursing is that career. Nursing as a profession would be personally rewarding in many aspects and would also set me up for a successful career in my future. I want to be a nurse because I enjoy being around people in their times of need, and I get internal satisfaction from serving those that need help. I believe that if I fully devote my life to achieving this goal, I can become a wonderful addition to the medical field and make a difference. Nursing to me is more than a profession; it is an art. I believe that caring is the essence of nursing, and that it goes beyond just science. Being a nurse means helping patients find deeper meanings to their illnesses and suffering so that they are able to understand themselves better and are able to heal.
Thus, I believe that if I am to become a nurse, I have to be consciously engaged in caring for the patients if I am to connect and establish relationships that work to promote my patients' health and healing. I look forward to forming these relationships.

Thursday, March 5, 2020

The Role of Supply Chain Risk Management in Ensuring Smooth Functioning

The Role of Supply Chain Risk Management in Ensuring Smooth Functioning

Introduction

A number of issues, such as changes in product design, branding and employees, are likely to cause risks that threaten the supply chain. Risk management in this field is a fundamental activity at the executive level in most organizations. Several studies have developed models and theories that attempt to explain the need for supply chain management in reducing risks. The purpose of this paper is to review and analyze research articles from various authors with the aim of exploring the theories and models developed over the last few decades.

Model and Theory Analysis: Review of Research

Lin and Zhou (2011) carried out a study to address the impact that product design changes have on the supply chain, with a special focus on the risks involved. Using a case study, the researchers developed concepts that attempt to explain how the supply chain runs under risk when the product design changes significantly. The theory attempts to show that such changes expose an organization to a number of risks in supply, policy and delivery. Moreover, it suggests that a change in product design leads to an array of risks at the internal level, involving research and design, production, planning, organization and information. The theory indicates that any change in product design that may significantly affect customers' and retailers' perception of a product is likely to expose the supply chain system to these risks, whether the change is customer-requested or company-initiated.
Using an in-depth longitudinal case study, Khan, Christopher and Burnes (2012) examined the impact of product design on supply chain risk. The case study was based on a major clothing retailer in the United Kingdom. The researchers aimed to address questions associated with the increasingly important issue of the impact of product design on the risks involved in supply chain management. The case study led to a concept used to explain the impact of product design on the supply chain and the associated risks. The researchers theorize that risk management in the supply chain is heavily dependent on product design, where recognition of a design is a creative function of managing risks. In addition, the theory attempts to show that recognizing product design is a fundamental platform on which risks are managed with ease and effectiveness. This concept suggests that recognition of product design must be one of the major activities and requirements for risk management at the executive level. The theory also has a close association with that of Lin and Zhou (2011), as both emphasize the need to consider product design as a factor that may create risks as well as enable effective risk management in the supply chain. Christopher and Peck (2004) present a good analysis of how supply chain management can effectively control risks by building a resilient supply chain system. Although their work is an analysis of findings rather than an empirical study, it provides the reader with important theories that attempt to explain how product design impacts the supply chain in modern organizations.
An important model developed in this article is the argument that building a resilient supply chain depends on a number of features that can be engineered into the supply chain to improve resilience. Among these is the need to focus on product design, which should be incorporated into the general design of supply chains in organizations. The concept also argues that product design plays a major role in the process of understanding the supply chain and its structure. According to the article, products are the major element of a supply process, which means that their nature and impact on the whole process are fundamental. Therefore, failure to focus on product design when engineering supply chains is likely to introduce risks that threaten the integrity and effectiveness of supply chain management in a globalized business environment. This theory is important in providing background on the need to focus on product design when managing and engineering supply chains; indeed, it shows the important role product design plays in mitigating supply chain risks. The article by Chopra and Sodhi (2005) is based on real-life examples of how poorly managed supply chain risk threatens the business process in a modern business environment. The authors compare two corporate cases arising from a single event: a semiconductor plant operated by Royal Philips Electronics in Albuquerque, New Mexico, was struck by lightning in 2000, and the massive surge the lightning caused at the grid started a fire that completely destroyed the plant's microchips. Nokia Corporation was one of the major customers of Royal Philips at the time.
The lightning strike caused a massive reduction in the number of microchips in Royal Philips' stock, which made it difficult for corporations like Nokia and others to deal with the company. However, Nokia had more than one supplier in its supply chain strategy, which made it possible to switch its chip orders away from Royal Philips on a temporary basis. This proved effective in risk mitigation. On the other hand, the authors compared this situation with the impact of the problem at Telefon AB Ericsson, another major customer of Royal Philips. This company had a single supplier of chips in its supply chain; it suffered from the problem, which led to a complete shutdown of operations until Royal Philips resumed production. From the two examples, the authors develop a theory in which they explain the need for an effective design of the supply chain system. They argue that corporations that use multiple designs for a product that is either sold or outsourced are likely to mitigate the risks involved when one product line or design is affected by a problem. In other words, this theory hypothesizes that multiple-supplier supply chain designs are more effective in risk management and mitigation than single-supplier ones. Supply chain management in the process of supplying and delivering high-risk products such as oil and gas proves to be one of the most crucial concerns of managers in these corporations. In fact, the high-risk but highly profitable oil and gas production and supply industry provides a good example of how risk management in the supply chain can be enhanced through design.
Vosooghi, Fazli and Mavi (2012) used this example to develop an additional concept of product design and its impact on supply chain management in the oil supply industry. The researchers use the fuzzy analytic hierarchy process (FAHP) to weigh and analyze the risks related to the crude oil supply chain. The study, carried out in Iran, developed a theory that can be used to explain how risks can be effectively managed and mitigated in the crude oil supply process. This model argues that regulation and environmental risks, as well as cooperation policies, can be viewed from a design perspective. Although the theory does not deal directly with product design, the conclusions of the article indicate that the way in which the product is designed usually influences the effectiveness of risk management in the supply chain. The aerospace industry is another high-risk field that requires attention when studying risk management and product design. Sinha, Whitman and Malzahn (2004) developed a study in which they aimed to explain how risk management can be effected in the aerospace industry. The researchers argue that most supply chain systems involve a single supplier, which is likely to increase the risks involved; risk management therefore proves to be an important area of management in companies that adopt this system. The researchers' aim was to develop a model that can mitigate risks in the supply chains adopted by aerospace companies, and the results of the study provide such models. According to the study, the IDEF0 concept is a model that mitigates risks in aerospace supply chains. The model has five stages: risk identification, risk assessment, planning, failure analysis and continuous improvement.
Although this model focuses on a number of issues, it is worth noting that the design of the products in the supply chain system is a fundamental aspect of it. Tang (2007) published an article that explains the risks involved in supply chains and how they can be managed effectively even during a crisis. The researcher develops a model that attempts to show how supply chain managers can prepare the supply chain to navigate major disruptions whenever they occur. According to this model, inherent fluctuations are the first aspect that increases the risks, which implies that they should be the first issue addressed in management. Secondly, the model indicates that corporations must design and re-engineer their supply chain systems to enhance resilience and the ability to withstand the impacts of major disruptions. In addition, the model links enhanced resilience to the process of retaining apprehensive customers. Goh, Lim and Meng (2007) developed a study in which they attempted to build a model for enhancing risk management in globalized supply chain networks. The study, carried out in Singapore, provided a scholastic model that underlines the need for supply chain risk management to reduce the threats faced by globalized organizations. The model, known as the multi-stage global supply chain network, incorporates a number of supply chain aspects of a globalized business system. For instance, it treats related risks such as supply, demand, disruption and exchange-rate risk as the most important areas of focus in managing risks. In addition, the model provides a new solution methodology that makes use of Moreau-Yosida regularization, design and an algorithm that enhances risk management and mitigation in diagnosing risk-associated problems in globalized, multi-stage networks. Although this model is difficult to implement, it is highly effective in managing and mitigating risks in supply chains.
Sheffi (2001) takes a different approach to developing a model for risk management and mitigation in supply chains. In this article, Sheffi takes the risks posed by terrorism as a major threat to modern supply chain systems, analyzing companies that were affected by the 9/11 terrorist attacks in the US. Using several examples, the researcher develops a model that explains the importance of supply chain management in risk mitigation. This model focuses on two issues. First, it argues that corporations must adopt strategies that build certain operational redundancies in order to enhance their preparedness for risks. Secondly, it focuses on reducing reliance on lead time and on particular demand scenarios. The model suggests that public-private partnership is the best way through which companies can organize themselves into networks that enhance risk management and mitigation. In 2007, the American corporations IBM, KPMG and ACE sponsored a study with the aim of revealing best practice for managing risks in supply chains. The research institute, the Economist Intelligence Unit, produced a comprehensive study and report setting out its model for managing supply chain risks in the modern context. In its simplest form, the model suggests that risk management is a discipline that has moved beyond loss avoidance to assume a new position as a key contributor to market advantage. According to the model, this is achieved through improved corporate reputation and better standing among bodies with an oversight role, such as rating agencies. In addition, the model hypothesizes that risk management in supply chains has become an area that needs both technology and workmanship, because ideas must be generated, devised and implemented.
It also indicates that technology is an additional source of risk for supply chains, especially at a time when the supply of products has gone virtual thanks to internet technology. However, the model does not imply that technology should be avoided; rather, it suggests that technology and workmanship should be integrated to provide the best method for mitigating and managing risks in supply chains.

Conclusion

From this analysis, a number of points should be noted. The models developed over the years to enhance risk management in supply chains tend to focus on the product, product delivery and the internal aspects of management. They incorporate the ideas of understanding the risks, developing prior knowledge of them, ensuring everyone is involved, company-to-company or company-public sector relations, and the use of technology. Although the models differ, most attempt to show that the supply chain is one of the areas of corporate management that runs under high risk due to the links between the company and the other parties in its supply chain system. Therefore, changes in product design, branding and employees are likely to cause risks that threaten the supply chain. Risk management in this field is a fundamental activity at the executive level in most organizations, and these models and theories have attempted to explain the need for supply chain management in reducing risks.

References

Chopra, S & Sodhi, M 2005, "Managing risk to avoid supply-chain breakdown", MIT Sloan Management Review, vol. 3, no. 1, pp. 53-64.
Christopher, M & Peck, H 2004, "Building the resilient supply chain", International Journal of Logistics Management, vol. 15, no. 2, pp. 1-13.
Economist Intelligence Unit 2007, Best practice in risk management: a function comes of age, Economist Intelligence Unit, New York.
Goh, M, Lim, J & Meng 2007, "A stochastic model for risk management in global supply chain networks", European Journal of Operational Research, vol. 182, no. 1, pp. 164-173.
Khan, O, Christopher, M & Burnes, B 2012, "The impact of product design on supply chain risk: a case study", International Journal of Physical Distribution & Logistics Management, vol. 38, no. 5, pp. 412-432.
Lin, Y & Zhou, L 2011, "The impacts of product design changes on supply chain risk: a case study", International Journal of Physical Distribution & Logistics Management, vol. 41, no. 2, pp. 162-186.
Sheffi, Y 2001, "Supply chain management under the threat of international terrorism", International Journal of Logistics Management, vol. 12, no. 2, pp. 1-11.
Sinha, PR, Whitman, LE & Malzahn, D 2004, "Methodology to mitigate supplier risk in an aerospace supply chain", Supply Chain Management: An International Journal, vol. 9, no. 2, pp. 154-168.
Tang, C 2007, "Robust strategies for mitigating supply chain disruptions", International Journal of Logistics Research and Applications, vol. 9, no. 1, pp. 34-56.
Vosooghi, M, Fazli, S & Mavi, R 2012, "Crude oil supply chain risk management with fuzzy analytic hierarchy process", American Journal of Scientific Research, vol. 12, no. 46, pp. 34-42.

Tuesday, February 18, 2020

Stop Online Piracy Act (SOPA) Research Paper Example | Topics and Well Written Essays - 1250 words

Stop Online Piracy Act (SOPA) - Research Paper Example

Requirements consist of the application for court orders to bar advertising networks and payment services from doing business with offending websites, to bar search engines from linking to such sites, and to require Internet service providers to block access to them. The bill would expand criminal law to cover the unauthorized transmission of copyrighted material, setting a maximum penalty of five years' imprisonment. The bill has grave implications for the current structure of the Internet in every sense, as it allows the Justice Department and the owners of intellectual property to obtain court orders against sites or services that allow or facilitate alleged copyright infringement, including: blocking by ISPs of the website or service in question, including hosting and even at the DNS level (although this has been a matter of discussion); requiring payment companies on the internet (like PayPal) to freeze funds and restrict use of the service; blocking the sites that provide advertising services, so that, for example, Google AdSense could not serve the reported website if this law were approved; and removing links to the reported website or service. The bill makes the unauthorized broadcast or other distribution of copyrighted content a criminal offense, punishing the guilty with a maximum penalty of five years' imprisonment. ... Under the bill, any member of a network on the Internet, from service providers and search engines to advertisers, can in effect be required by any rights owner to stop providing services to a resource accused of piracy and to cease any interaction with it (for example, to close the channel, stop paying for content, suspend the advertising contract, limit the payment service, delete the site from Google, remove links to the site, block the site from being visited entirely, or prohibit payment systems such as PayPal and Visa from making payments in favor of the service); otherwise, any of the directly or indirectly accused site's counterparties will be regarded as its partner. In the eyes of this new law, merely posting on your wall a picture, document or video that is under copyright would be considered a crime. The bill was welcomed by the entertainment industry in the United States and by members of both parties. The irony is that this kind of activity is many times promoted by the entertainment industry itself, since it is the same fans who virally promote their favorite artists through social networks. Supporters of the bill argue that it is necessary to protect revenue, employment and intellectual property in their respective industries (movies, music, software, etc.), and in particular that it will help deal effectively with services outside the U.S. and beyond their jurisdiction. According to them, it protects the market for intellectual property, employment and income, and strengthens enforcement of copyright laws, especially against foreign websites, alleging that existing laws are defective in not covering foreign-owned and foreign-operated sites, and citing examples of "active promotion of the websites

Monday, February 3, 2020

Managed care contracts Essay Example | Topics and Well Written Essays - 500 words

Managed care contracts - Essay Example Managed care is sometimes used as a general term for the activity of organizing doctors, hospitals, and other providers into groups in order to enhance the quality and cost-effectiveness of health care. Managed Care Organizations (MCO) include HMO, PPO, POS, EPO, PHO, IDS, AHP, IPA, etc. Usually when one speaks of a managed care organization, one is speaking of the entity that manages risk, contracts with providers, is paid by employers or patient groups, or handles claims processing. Managed care has effectively formed a "go-between", brokerage or 3rd party arrangement by existing as the gatekeeper between payers and providers and patients. The term managed care is often misunderstood, as it refers to numerous aspects of healthcare management, payment and organization. It is best to ask the speaker to clarify what he or she means when using the term "managed care". In the purest sense, all people working in healthcare and medical insurance can be thought of as "managing care." Any s ystem of health payment or delivery arrangements where the plan attempts to control or coordinate use of health services by its enrolled members in order to contain health expenditures, improve quality, or both. Arrangements often involve a defined delivery system of providers with some form of contractual arrangement with the plan. See Health Maintenance Organization, Independent Practice Association, Preferred Provider Organization (Pohley 2008). Systems and techniques used to control the use of health care services. Includes a review of medical necessity, incentives to use certain providers, and case management. The body of clinical, financial and organizational activities designed to ensure the provision of appropriate health care services in a cost-efficient manner. Managed care techniques are most often practiced by organizations and professionals that assume risk for a

Sunday, January 26, 2020

Polymerase Chain Reaction (PCR) Steps

Polymerase Chain Reaction (PCR) Steps

We owe the discovery of the polymerase chain reaction to Kary B. Mullis in the year 1983; he was its actual proponent. Few people are aware that in 1971, Kleppe and the Nobel laureate Gobind Khorana published studies including a description of techniques that are now known to be the basis for nucleic acid replication. However, Kleppe and Khorana were unfortunately ahead of their times: oligonucleotide synthesis was not as simple as it is today, genes had not been sequenced, and the idea of thermostable DNA polymerases had not been described. Hence, the credit for discovering PCR remains with Kary Mullis. The polymerase chain reaction is essentially a cell-free method of DNA and RNA cloning. The DNA or RNA is isolated from the cell and replicated up to a million times; what you get at the end is a greatly amplified fragment of DNA. PCR is quick, reliable and sensitive, and its variations have made it the basis of genetic testing.

WHAT KARY B MULLIS SAYS ABOUT HOW HE DISCOVERED THE POLYMERASE CHAIN REACTION

"I was just driving and thinking about ideas and suddenly I saw it. I saw the polymerase chain reaction as clear as if it were up on a blackboard inside my head, so I pulled over and started scribbling." A chemist friend of his was asleep in the car. Mullis says that "Jennifer objected groggily to the delay and the light, but I exclaimed I had discovered something fantastic. Unimpressed, she went back to sleep." Mullis kept scribbling calculations, right there in the car. He convinced Cetus, the small California biotech company he was working for at that time, that he was onto something big. They finally listened. They later sold the PCR patent to Hoffmann-La Roche for a staggering $300 million, the largest sum paid for a patent up to that point; Mullis meanwhile received a $10,000 bonus.
BASIS OF THE METHOD

The purpose of PCR is to generate a huge number of copies of a segment of DNA, which could be a gene, a portion of a gene, or an intronic region. There are three major steps in a PCR, which are repeated for 30 or 40 cycles. This is done on an automated cycler, which can heat or cool the tubes containing the reaction mixture, as required, in a very short period of time.

Denaturation. During this step, the double-stranded DNA melts and opens into single-stranded DNA, and all enzymatic reactions, such as those carried over from a previous cycle, stop. The temperature for denaturation is not fixed, but it usually occurs at about 95 °C. The denaturation temperature depends largely on the G:C (guanine:cytosine) content of the DNA fragment to be analyzed. This is reasonable when one considers that the G:C pair is held by a triple hydrogen bond and the A:T pair by a double bond, and a triple bond is harder to break than a double bond. Therefore, when the segment of DNA to be analyzed has a very high G:C content, the denaturation temperature can reach up to 99 °C.

Annealing. This requires temperatures lower than those required for denaturation. In this step, the primers anneal to the specific segment of DNA that is to be amplified. The primers jiggle around due to Brownian motion, and ionic bonds are constantly formed and broken between the single-stranded primer and the single-stranded template. The more stable bonds, formed by primers that fit exactly, last a little longer, and on that little piece of what is now double-stranded DNA (template and primer) the polymerase can attach and start copying the template. Once a few bases have been built in, the ionic bond between the template and the primer is so strong that it no longer breaks.
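Because annealing behaviour tracks G:C content (triple versus double hydrogen bonds, as noted above), primer melting temperatures are routinely estimated before choosing an annealing temperature. A minimal sketch using the Wallace rule, Tm = 2(A+T) + 4(G+C), which is only a rough rule of thumb for short primers; real primer design uses nearest-neighbour thermodynamic models:

```python
# Rough melting-temperature estimate for a short primer via the Wallace rule.
# Sketch only: a rule of thumb, reasonable for primers of roughly 14-20 nt.

def wallace_tm(primer):
    """Tm in degrees C: 2 per A/T (double H-bond) plus 4 per G/C (triple H-bond)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCATGCATGCATGC"))  # 8 A/T and 8 G/C -> 16 + 32 = 48
```

The higher weighting of G and C mirrors the point in the text: G:C-rich fragments need more heat to separate, for primers and templates alike.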
Extension: This is done at 72 °C, the optimum working temperature for the polymerase. Primers that are exactly complementary to the template are already bound strongly enough to withstand the higher temperature; primers sitting on positions with no exact match (non-complementary) come loose again and do not give extension of the fragment. The new strand is synthesized in the 5′ to 3′ direction: the phosphate group of each incoming dNTP is coupled with the 3′ hydroxyl group of the extending DNA strand. The extension time depends on two factors: the type of polymerase used and the length of the DNA fragment to be amplified. Taq polymerase usually adds dNTPs at a rate of about 1000 bases per minute. It is important to realize that each component of the PCR, including the input DNA, the oligonucleotide primers, the thermostable polymerase, the buffer and the cycling parameters, has a profound impact on the sensitivity, specificity and fidelity of the reaction.

The three steps of the first cycle are shown, that is, denaturation, annealing and extension. At the end of the first cycle, two strands have been synthesized. At the end of the second cycle, four strands have been synthesized (the three steps of the cycle have not been shown). At the end of the third cycle, eight strands have been synthesized. The number of strands increases exponentially with each cycle.

Nuggets
The polymerase chain reaction is essentially a cell-free method of cloning DNA and RNA. Three steps are involved in every cycle: denaturation, annealing and extension. At the end of each cycle, the amount of DNA doubles. Therefore, theoretically, if there are n cycles in a reaction, the number of DNA fragments at the end of the reaction will be 2^n.

COMPONENTS OF THE POLYMERASE CHAIN REACTION

The components that are essential for a successful PCR are elaborated here.
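The doubling arithmetic above can be sketched in a few lines of Python. This is an illustration of the ideal 2^n rule only; real reactions fall short of perfect doubling, so an optional per-cycle efficiency parameter is included (the function name and the efficiency parameter are illustrative, not part of the text).

```python
# Ideal PCR amplification arithmetic: with perfect doubling,
# n cycles multiply the starting copy number by 2**n.
# `efficiency` (0-1) models imperfect doubling per cycle.

def amplified_copies(initial_copies, cycles, efficiency=1.0):
    """Expected copy number after `cycles` rounds of PCR."""
    copies = float(initial_copies)
    for _ in range(cycles):
        copies *= (1.0 + efficiency)  # each cycle multiplies by up to 2
    return copies

# One template molecule, 30 ideal cycles: 2**30, about a billion copies.
print(amplified_copies(1, 30))
# The same reaction at 90% per-cycle efficiency yields far fewer.
print(amplified_copies(1, 30, 0.9))
```

Thirty ideal cycles of a single molecule already give over 10^9 copies, which is why PCR is described as replicating the template "up to a million times" and more.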
TEMPLATE DNA

This is the portion of DNA (the gene or region) that is to be amplified. Usually about 100 ng of genomic DNA is used per PCR reaction, although this can vary depending on the target gene concentration and the source of DNA. The PCR is inherently sensitive, so it is not necessary for the template DNA to be abundant or highly purified. Higher amounts of template DNA can increase the yield of nonspecific PCR products; if the fidelity of the reaction is crucial, one should limit both the template DNA quantity and the number of PCR cycles.

DNA in solution may contain contaminants that inhibit the PCR. Reagents such as phenol, EDTA, and proteinase K can inhibit Taq DNA polymerase. However, isopropanol precipitation of DNA followed by washing of the DNA pellet with 70% ethanol is usually effective in removing traces of these contaminants from the DNA sample.

Effects of Fixation

This is of particular interest to the pathologist, who has to deal with formalin-fixed tissue. DNA extracted from fresh tissue or cell suspensions forms an optimal template for PCR. Tissue is best stored at -70 °C, at which temperature the nucleic acids keep indefinitely. A temperature of -20 °C is sufficient to preserve the DNA for several months, and at 4 °C the DNA can be stored for several weeks. At room temperature, DNA has been successfully stored for hours to days; however, mitochondrial DNA is very sensitive to temperature and may degrade in thawed tissues.

DNA extracted from fixed tissue has been used successfully for PCR, but the type of fixative and the duration of fixation are of critical importance. Non-crosslinking fixatives like ethanol provide the best DNA. Formaldehyde is variable in its DNA yield. Carnoy's, Zenker's and Bouin's are poor fixatives as far as DNA preservation is concerned. Not surprisingly, formaldehyde is the fixative that has been evaluated the most, because it is the most commonly used worldwide.
Studies have demonstrated that a successful PCR depends on the protocol used to extract the DNA and on the length of fixation. Formaldehyde reacts with DNA and proteins to form labile hydroxymethyl intermediates, which give rise to a mixture of end products including DNA-DNA and DNA-protein adducts. Purification of DNA from formalin-fixed tissue therefore includes heating, to reverse the hydroxymethyl additions, and treatment with a proteinase, to hydrolyze the covalently linked proteins. However, there is no way to reverse the DNA-DNA crosslinks, and these inhibit the DNA polymerases. This accounts for the low PCR yield seen with formalin-fixed tissue. Usually, a PCR with formalin-fixed DNA as template yields products no more than 600 bp in size.

Nuggets
Template DNA is required in a concentration of about 100 ng for each PCR reaction. Contaminants in the DNA may inhibit the reaction. Fixed tissue provides DNA that is not as good as DNA obtained from fresh or frozen tissue, and different fixatives give different DNA yields: alcohol is the best fixative, Carnoy's, Zenker's and Bouin's are poor, and formalin is intermediate. Purification of DNA from formalin-fixed tissue involves heating, to reverse the attachment of hydroxymethyl intermediates, and treatment with a proteinase, to hydrolyze the covalently linked proteins. The DNA obtained after fixation can be used for reactions in which the PCR product is not more than 600 bp.

PCR BUFFER

The purpose of the buffer in PCR is to provide the optimum pH and potassium ion concentration for the DNA polymerase enzyme (usually obtained from the bacterium Thermus aquaticus) to function. Most buffers are supplied at 10X concentration and require dilution before use. Although most protocols recommend a final buffer concentration of 1X, a concentration of 1.5X may give an increased yield of PCR product. The PCR buffer contains many components.
Some important ones are discussed here.

Divalent and monovalent cations: These are required by all thermostable DNA polymerases. Mg2+ is the divalent cation usually present in PCR buffers; some polymerases also work with buffers containing Mn2+. Calcium-containing buffers are ineffective and therefore rarely used. Buffers can be divided into first- and second-generation buffers on the basis of their ionic components. Second-generation buffers, as opposed to first-generation buffers, also contain (NH4)2SO4 and permit consistent PCR product yield and specificity over a wide range of magnesium concentrations (1.0 to 4.0 mM MgCl2). The overall specificity and yield of PCR products are better with second-generation buffers than with first-generation buffers. Buffers also contain KCl. Salts like KCl and NaCl may help to facilitate primer annealing, but concentrations above 50 mM will inhibit Taq polymerase activity. Interactions between K+ and NH4+ allow specific primer hybridization over a broad range of temperatures.

Magnesium is one of the most important components of the buffer. Mg2+ ions form a soluble complex with the dNTPs that is essential for dNTP incorporation; they also stimulate polymerase activity and influence the annealing efficiency of primer to template DNA. The concentration of MgCl2 can have a dramatic effect on the specificity and yield of PCR products. The optimal concentration of MgCl2 is between 1.0 and 1.5 mM for most reactions. A low MgCl2 concentration helps to eliminate nonspecific priming and formation of background PCR products, which is desirable when fidelity of DNA synthesis is critical; at the same time, too few Mg2+ ions can result in a low yield of PCR products. A high MgCl2 concentration helps to stabilize the interaction of the primers with their intended template, but can also result in nonspecific binding and formation of nonspecific PCR products.
It is important to be aware that many PCR buffers (often sold as 10X stocks) already contain some amount of MgCl2, so any further addition must be carefully monitored. In the best case, the PCR works well with the Mg2+ already present in the buffer; if it does not, the amount of Mg2+ in the reaction mix must be standardized. This can be difficult because the dNTPs and the oligonucleotide primers bind Mg2+, so the molar concentration of Mg2+ must exceed the molar concentration of phosphate groups contributed by the dNTPs and primers. As a rule of thumb, the magnesium concentration in the reaction mixture is generally 0.5 to 2.5 mM greater than the concentration of dNTPs. The optimal concentration of Mg2+ should therefore be standardized for each reaction.

Tris-Cl: The concentration of Tris-Cl is adjusted so that the pH of the reaction mixture is maintained between 8.3 and 8.8 at room temperature. In standard PCR reactions, it is usually present at a concentration of 10 mM. When the mixture is incubated at 72 °C, the temperature used for extension, the pH falls by more than a full unit, to about 7.2.

Other components: Some buffers also contain components like BSA (bovine serum albumin) and DMSO (dimethyl sulphoxide). BSA reduces the amount of template sticking to the side of the tube, making it available for amplification and reducing the risk of primer dimers. (Primer dimers are products obtained when the primers anneal to each other instead of to the template DNA.) DMSO facilitates DNA strand separation in GC-rich, difficult secondary structures because it disrupts base pairing, and has been shown to improve PCR efficiency. In practice, it is wise not to tamper with the buffer provided with the Taq polymerase.
The buffer is usually standardized for the vial of Taq, and there is normally no need to add extra MgCl2 or stabilizers like DMSO and BSA. However, some Taq kits supply the buffer in one vial and the MgCl2 in a separate vial. Under such circumstances, it is advisable to start with 1 µL of MgCl2 and increase its concentration in aliquots of 0.5 µL if the initial reaction fails.

Nuggets
The PCR buffer contains divalent and monovalent cations, Tris-Cl and other components. It provides the correct pH and potassium concentration for the DNA polymerase to function. The most common divalent ion used is magnesium, in the form of MgCl2, and the MgCl2 concentration is vital for PCR. Tris-Cl maintains the pH between 8.3 and 8.8 at room temperature. Salts like NaCl and KCl may facilitate primer annealing. Other components like BSA and DMSO help to increase the sensitivity and specificity of the reaction.

OLIGONUCLEOTIDE PRIMERS

What are Oligonucleotide Primers?

PCR primers are short fragments of single-stranded DNA (17-30 nucleotides in length) that are complementary to the DNA sequences flanking the target region of interest. Their purpose is to provide a free 3′-OH group to which the DNA polymerase can add dNTPs. Two primers are used in the reaction. The forward primer anneals to the DNA minus strand and directs synthesis in the 5′ to 3′ direction (primer sequences are always written 5′ to 3′). The reverse primer anneals to the other strand of the DNA.

How to design a primer?

The predominant goal in primer design is specificity: each primer must anneal stably to its target sequence in the template DNA. The longer the primer, the higher its specificity; unfortunately, the longer the primer, the less efficiently it anneals to a particular sequence in the template DNA. Conversely, a short primer anneals readily, but its specificity is poor.
A compromise is reached by designing primers between 20 and 25 nucleotides long. Fewer than 17 nucleotides often leads to nonspecific annealing, while more than 25 nucleotides may prevent annealing altogether. Remember that the DNA sequence in the human genome appears to be a random sequence of nucleotides, so when designing primers it is tempting to calculate the probability that a sequence exactly complementary to a given string of nucleotides will occur in the genome by chance. Several formulae exist for such calculations; however, their predictions may be wildly wrong, because the distribution of codons is nonrandom, with repetitive DNA sequences and gene families. It is therefore advisable to use primers longer than the statistically indicated minimum, and to scan DNA databases to check that the proposed sequence occurs only in the desired gene.

For a practicing pathologist, it is best not to attempt primer design. What a pathologist requires is the primer sequence for an established test. If, for example, a pathologist requires the primer sequence for the diagnosis of sickle cell anemia, all he has to do is search the web for papers on molecular testing of sickle cell anemia; the primer sequences will be provided in the paper. Custom-made primers can be synthesized commercially, and several biotechnology companies provide this facility. Before the primers are ordered, it is essential to check that the sequence is correct and that there are no missing nucleotides in the sequence. That is where BLAST is invaluable. Before the intricacies of the BLAST search are elaborated upon, it should be mentioned that primer design does not depend only on the sequence of nucleotides: other factors like the GC content and melting temperature are also important considerations, and they are dealt with later in the chapter.
BLAST and its uses

BLAST is an acronym for Basic Local Alignment Search Tool. It is an algorithm for comparing primary biological sequence information with a library or database of sequences. A BLAST can be performed for different organisms, but in this book we will concern ourselves with nucleotide BLAST in humans only. BLAST searches the database for sequences similar to the sequence of interest (the query sequence) using a two-step approach. The basic concept is that the more similar segments two sequences share, and the longer those segments are, the less divergent the sequences are, and therefore the more likely they are to be genetically related (homologous).

Before performing a BLAST search, the oligonucleotide sequence is first identified and fed into the program. BLAST first searches for short regions of a given length, called words (W), and then searches for substrings that are compared to the query sequence. The program then aligns these with sequences in the database (target sequences) using a substitution matrix. For every pair of sequences (query and target) that have a word or words in common, BLAST extends the search in both directions to find alignments that score higher (are more similar) than a certain score threshold (S). These alignments are called high-scoring pairs (HSPs); the maximal-scoring HSPs are called maximum segment pairs (MSPs).

The BLAST search as outlined in fig 7.2 shows the results of the search. If we scroll down further, we can see the sequences producing significant alignments. Note that in this BLAST search, there are 49 BLAST hits in the query sequence. In the list shown in figure 7.2, the hits start with the best (most similar). To the right of the screen is the E-value: the expected number of chance alignments; the lower the E-value, the more significant the score. First in the list is the sequence finding itself, which obviously has the best score.
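The word-seeding idea behind BLAST's first step can be illustrated with a toy sketch. This is only a demonstration of the seeding concept, not the real BLAST program, which additionally scores each seed with a substitution matrix and extends it in both directions against the threshold S; the function name and example sequences are invented for illustration.

```python
# Toy illustration of BLAST-style word seeding: find every length-W
# "word" shared between a query and a target sequence, with its
# position in each. Real BLAST then extends each seed in both
# directions and keeps alignments scoring above a threshold S.

def seed_words(query, target, w=4):
    """Return (word, query_pos, target_pos) for every shared W-mer."""
    # Index all W-length words of the target by their sequence.
    index = {}
    for i in range(len(target) - w + 1):
        index.setdefault(target[i:i + w], []).append(i)
    hits = []
    for j in range(len(query) - w + 1):
        word = query[j:j + w]
        for i in index.get(word, []):
            hits.append((word, j, i))
    return hits

query = "GATTACAGGT"
target = "CCGATTACAT"
for word, qpos, tpos in seed_words(query, target):
    print(word, qpos, tpos)
```

Here the two sequences share four overlapping 4-letter words (GATT, ATTA, TTAC, TACA), which in real BLAST would seed one extended local alignment over the common GATTACA stretch.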
To the left is the accession number, a unique code that identifies a sequence in a database. It is important to know that there is no set cut-off that determines whether a match is significant or similar enough; this must be decided according to the goals of the project. The sequences provided in figure 7.2 show a significant alignment with Pseudomonas japonica, with a high score (bits) and a low E-value. Note that the lower the E-value, the greater the likelihood that the sequence is a good match.

BLAST output can be delivered in a variety of formats, including HTML, plain text and XML; for the NCBI's web page, the default output format is HTML. When performing a BLAST on the NCBI (National Centre for Biotechnology Information) site, the results are displayed in a graphical format showing the following: the hits found; a table of sequence identifiers for the hits with scoring-related data; and alignments between the sequence of interest and the hits, with the corresponding BLAST scores. The easiest to read and most informative of these is probably the table.

The main idea of BLAST is that statistically significant alignments often contain high-scoring segment pairs (HSPs), and BLAST searches for these high-scoring alignments between the query sequence and the sequences in the database. The speed and relatively good accuracy of BLAST are among its key technical innovations.

Sequence of events to be followed when performing a BLAST search:
Go to PUBMED (http://www.ncbi.nlm.nih.gov/pubmed/)
Scroll down to reach a heading called POPULAR
Under POPULAR, click on BLAST
Click on nucleotide blast
Under the heading "enter accession number(s), gi(s), or FASTA sequence(s)", type or paste the sequence that you want matched
Click BLAST
Wait for the results
Analyse the nucleotide sequence as it appears
Calculation of Melting Temperature

The melting temperature, or Tm, is a measure of the stability of the duplex formed by the primer and the complementary target DNA sequence, and is an important consideration in primer design. Tm corresponds to the midpoint in the transition of DNA from its double-stranded to its single-stranded form. A higher Tm permits a higher annealing temperature, which ensures that the annealing between the target DNA and the primer is specific. The Tm depends on the length of the oligonucleotide and the G+C content of the primer. The formula for calculating Tm is given in table 7.1.

Table 7.1: Formula for calculation of the melting temperature.
Length of primer            Tm (°C)
Less than 20 nucleotides    2 x (effective length*)
20 to 35 nucleotides        22 + 1.46 x (effective length*)
*Effective length = 2 x (number of G+C) + number of (A+T)

Primers are usually designed to avoid matching repetitive DNA sequences, including repeats of a single nucleotide. The two primers in a PCR reaction should not be complementary to each other, since such complementarity can lead to spurious amplification artifacts called primer dimers. The 3′ end of a primer is the most critical for initiating polymerization. The rules for selecting primers, in addition to those already mentioned, are as follows:
The C and G nucleotides should be distributed uniformly throughout the primer and comprise approximately 40% of the bases.
More than three G or C nucleotides at the 3′ end of the primer should be avoided, as nonspecific priming may occur.
The primer should be neither self-complementary nor complementary to any other primer in the reaction mixture, in order to avoid formation of primer dimers or hairpin-like structures.
All possible sites of complementarity between the primer and the template DNA should be noted.
The melting temperatures of the flanking primers should not differ by more than 5 °C.
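The formula in table 7.1 translates directly into code. The sketch below follows the table as written (effective length = 2(G+C) + (A+T), with the two length brackets); the function name and the example primer are illustrative only.

```python
# Melting-temperature estimate following table 7.1: primers under
# 20 nt use Tm = 2 * effective length, primers of 20-35 nt use
# Tm = 22 + 1.46 * effective length, where
# effective length = 2*(number of G+C) + number of (A+T).

def melting_temperature(primer):
    primer = primer.upper()
    gc = primer.count("G") + primer.count("C")
    at = primer.count("A") + primer.count("T")
    effective_length = 2 * gc + at
    if len(primer) < 20:
        return 2 * effective_length
    elif len(primer) <= 35:
        return 22 + 1.46 * effective_length
    raise ValueError("formula applies to primers up to 35 nt")

# An 18-mer with 9 G/C and 9 A/T bases: effective length 27, Tm 54 °C.
print(melting_temperature("GCGCGCGCGATATATATA"))  # 54
```

A quick check of the rules above: the PCR annealing temperature would then be set roughly 5 °C below the lower of the two primer Tm values.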
Therefore, the G+C content and length must be chosen accordingly (a higher G+C content means a higher melting temperature). The PCR annealing temperature (TA) should be approximately 5 °C lower than the primer melting temperature. The G+C content of each primer should not exceed 60%, to avoid formation of internal secondary structures and long stretches of any one base. Primer extension will begin during the annealing step. Primers are always present in excess in conventional (symmetric) PCR amplification, typically in the range of 0.1 µM to 1 µM. It is generally advisable to use purified oligomers of the highest chemical integrity.

Primer Dimers

A primer dimer (PD) consists of primer molecules that have attached, or hybridized, to each other because of strings of complementary bases in the primers. The DNA polymerase amplifies the PD, leading to competition for PCR reagents and thus potentially inhibiting amplification of the DNA sequence targeted for amplification. In the first step of primer dimer formation, two primers anneal at their respective 3′ ends. In the second step, the DNA polymerase binds and extends the primers. In the third step, a single strand of the product of step two is used as a template to which fresh primers anneal, leading to synthesis of more PD product.

Primer dimers may be visible after gel electrophoresis of the PCR product. In ethidium bromide-stained gels, they are typically seen as 30-50 base-pair (bp) bands or smears of moderate to high intensity, and can easily be distinguished from the band of the target sequence, which is typically longer than 50 bp. One approach to preventing PD formation is physical-chemical optimization of the PCR system, i.e., changing the concentrations of primers, MgCl2 and nucleotides, the ionic strength, and the temperature of the reaction. However, reducing PD formation in this way may also reduce PCR efficiency, so other methods aim to reduce the formation of PDs only.
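The design rules above lend themselves to a simple screening function. This is a sketch only: the thresholds are taken from the text (length 17-30 nt, G+C not above 60%, no more than three G or C at the 3′ end), and the function name is illustrative.

```python
# Screen a primer against the design rules given in the text:
# length 17-30 nt, G+C content not above 60%, and no run of more
# than three G or C nucleotides at the 3' end. Returns a list of
# rule violations (an empty list means the primer passes).

def primer_violations(primer):
    primer = primer.upper()
    problems = []
    if not 17 <= len(primer) <= 30:
        problems.append("length outside 17-30 nt")
    gc_fraction = (primer.count("G") + primer.count("C")) / len(primer)
    if gc_fraction > 0.60:
        problems.append("G+C content above 60%")
    # Count consecutive G/C bases at the 3' end of the primer.
    run = 0
    for base in reversed(primer):
        if base in "GC":
            run += 1
        else:
            break
    if run > 3:
        problems.append("more than three G/C at the 3' end")
    return problems

print(primer_violations("ATGCATGCATGCATGCATGC"))  # []
print(primer_violations("ATATATATATATATAGGGGG"))  # flags the 3'-end G run
```

Checks such as self-complementarity and cross-complementarity with the other primer (the primer-dimer rules) would need sequence alignment and are deliberately left out of this sketch.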
These include careful primer design and the use of different PCR enzyme systems or reagents.

Nuggets
Oligonucleotide primers are short fragments of single-stranded DNA (17-30 nucleotides in length) that are complementary to the DNA sequences flanking the target region of interest; they dictate which region of DNA will be amplified in the PCR. Primer sequences can be obtained by reviewing previously published literature, and the sequence can be confirmed using BLAST (Basic Local Alignment Search Tool). The melting temperature is the midpoint in the observed transition from the double-stranded to the single-stranded form. A higher annealing temperature ensures that the annealing between the target DNA and the primer is specific. A primer dimer consists of primer molecules that have attached or hybridized to each other because of strings of complementary bases; Taq polymerase amplifies the primer dimer, leading to competition for the PCR reagents. Several methods are used to reduce primer dimer formation, including changing the concentrations of primers, MgCl2 and nucleotides, the ionic strength, and the temperature of the reaction.

TAQ DNA POLYMERASE

The initial PCR reactions used the Klenow fragment of Escherichia coli DNA polymerase. However, this enzyme is unstable at high temperatures, and a fresh aliquot had to be added after every denaturation step. The annealing and extension temperatures had to be kept low, and as a result, nonspecific products formed in abundance. The discovery of the thermostable Taq DNA polymerase ensured that the PCR did not remain a laboratory curiosity: the extension and annealing temperatures could now be kept high, and the formation of nonspecific products was greatly reduced. Taq became famous for its use in the polymerase chain reaction and was named "Molecule of the Year" by the journal Science.

Why Taq?
Taq is the enzyme of choice in PCR for the following reasons:
Taq works best at 75-80 °C, allowing the elongation step to occur at temperatures that make non-Watson-Crick base pairing a rare event.
It can add up to 1,000 nucleoside triphosphates to a growing DNA strand.
Taq has a half-life of 40 minutes at 95 °C and 9 minutes at 97.5 °C, and can replicate a 1000 base pair strand of DNA in less than 10 seconds at 72 °C.
Because of all these properties, Taq is the enzyme of choice in the PCR.

How does Taq polymerase act?

The first requirement is a primer, annealed to the template strand and carrying a free hydroxyl group at its 3′ end. During the extension phase, Taq synthesizes a new DNA strand complementary to the template by adding dNTPs in the 5′ to 3′ direction, condensing the 5′ phosphate group of each dNTP with the 3′ hydroxyl group at the end of the extending DNA strand. Since Taq works best between 70 °C and 80 °C, a temperature of 72 °C is usually chosen as the optimum extension temperature.

Where does Taq come from?

In Thermus aquaticus, Taq polymerase is expressed at very low levels, and commercial production from the organism itself is not economically viable. However, the enzyme can now be produced from engineered versions of the Taq gene expressed at high levels in E. coli.

What other polymerases are available for use in PCR?

Taq is not the only polymerase; others are available, but Taq is the one generally used in a PCR. A few other polymerases, with their uses, are as follows:

Pfu DNA polymerase: Found in Pyrococcus furiosus, it functions in vivo to replicate the organism's DNA. The main differences between Pfu and alternative enzymes are Pfu's superior thermostability and proofreading properties compared with other thermostable polymerases.
Unlike Taq DNA polymerase, Pfu DNA polymerase possesses 3′ to 5′ exonuclease proofreading activity, meaning that it works its way along the DNA from the 3′ end to the 5′ end and corrects nucleotide-misincorporation errors. Pfu-generated PCR fragments therefore have fewer errors than Taq-generated inserts, and as a result Pfu is more commonly used than the historically popular Taq for molecular cloning of PCR fragments. However, Pfu is slower, typically requiring 1-2 minutes to amplify 1 kb of DNA at 72 °C. Pfu can also be used in conjunction with Taq polymerase to obtain the fidelity of Pfu together with the speed of Taq.

Tfl DNA polymerase: Obtained from Thermus flavus, it is useful for the amplification of large segments of DNA.

WHAT IS FIDELITY?

All DNA polymerases have an intrinsic error rate that is highly dependent on the buffer composition, the pH of the buffer, the dNTP concentration and the sequence of the template itself. The types of errors introduced are frameshift mutations, single base-pair substitutions, and spontaneous rearrangements. The PCR therefore generates a product that is very similar, but in many cases not identical, to the original sequence. The quantity of dissimilar product obtained is obviously related to the cycle in which the mismatch took place. Fidelity is the ability of a polymerase to avoid incorporating wrong nucleotides during the reaction. Under normal circumstances, the incorporation of a wrong nucleotide makes no practical difference, because the size of the PCR product, which is usually what we look for, remains the same; however, these errors become significant during sequencing, which is when fidelity comes into play. Some polymerases, like Pfu, have a high fidelity.
In addition to reading in the 5′ to 3′ direction, these enzymes can also read in the 3′ to 5′ direction and correct wrongly incorporated nucleotides.

Saturday, January 18, 2020

Functional Requirements

1. Functional Requirements

Functional requirements define the fundamental actions that the system must perform. The functional requirements for the system are divided into three main categories: Reservation/Booking, Food, and Management. For further details, refer to the use cases.

EXAMPLE 1
1. Reservation/Booking
1.1. The system shall record reservations.
1.2. The system shall record the customer's first name.
1.3. The system shall record the customer's last name.
1.4. The system shall record the number of occupants.
1.5. The system shall record the room number.
1.6. The system shall display the default room rate.
1.6.1. The system shall allow the default room rate to be changed.
1.6.2. The system shall require a comment to be entered, describing the reason for changing the default room rate.
1.7. The system shall record the customer's phone number.
1.8. The system shall display whether or not the room is guaranteed.
1.9. The system shall generate a unique confirmation number for each reservation.
1.10. The system shall automatically cancel non-guaranteed reservations if the customer has not provided their credit card number by 6:00 pm on the check-in date.

EXAMPLE 2
2. Food
2.1. The system shall track all meals purchased in the hotel (restaurant and room service).
2.2. The system shall record payment and payment type for meals.
2.3. The system shall bill the current room if payment is not made at time of service.
2.4. The system shall accept reservations for the restaurant and room service.

EXAMPLE 3
3. Management
3.1. The system shall display the hotel occupancy for a specified period of time (days; including past, present, and future dates).
3.2. The system shall display projected occupancy for a period of time (days).
3.3. The system shall display room revenue for a specified period of time (days).
3.4. The system shall display food revenue for a specified period of time (days).
3.5.
The system shall display an exception report, showing where default room and food prices have been overridden.
3.6. The system shall allow for the addition of information regarding rooms, rates, menu items, prices, and user profiles.
3.7. The system shall allow for the deletion of information regarding rooms, rates, menu items, prices, and user profiles.
3.8. The system shall allow for the modification of information regarding rooms, rates, menu items, prices, and user profiles.
3.9. The system shall allow managers to assign user passwords.

2. Nonfunctional Requirements

Nonfunctional requirements define the needs in terms of performance, logical database requirements, design constraints, standards compliance, reliability, availability, security, maintainability, and portability.

EXAMPLE 1
Performance Requirements
Performance requirements define acceptable response times for system functionality.
The load time for user interface screens shall take no longer than two seconds.
The log-in information shall be verified within five seconds.
Queries shall return results within five seconds.

EXAMPLE 2
Logical Database Requirements
The logical database requirements include the retention of the following data elements.
This list is not a complete list and is designed as a starting point for development.

Booking/Reservation System: Customer first name; Customer last name; Customer address; Customer phone number; Number of occupants; Assigned room; Default room rate; Rate description; Guaranteed room (yes/no); Credit card number; Confirmation number; Automatic cancellation date; Expected check-in date; Expected check-in time; Actual check-in date; Actual check-in time; Expected check-out date; Expected check-out time; Actual check-out date; Actual check-out time; Customer feedback; Payment received (yes/no); Payment type; Total bill.

Food Services: Meal; Meal type; Meal item; Meal order; Meal payment (Bill to room/Credit/Check/Cash).

EXAMPLE 3
Design Constraints
The Hotel Management System shall be a stand-alone system running in a Windows environment. The system shall be developed using Java and an Access or Oracle database.

3. Illustrate a timeframe needed to complete each task based on the requirements from question 2. (5 Marks)

Answer: Estimating time frames

To manage your time well, you should know not only what tasks you need to accomplish, but also when those tasks must be completed and how long they'll take. Making accurate estimates about how long a task will take is one of the keys to effective time management. Many management problems are the result of unrealistic estimates of how long it will take to complete specific tasks. If you estimate time frames accurately, you'll be able to schedule work efficiently and meet deadlines:
• Schedule work efficiently: Accurate estimates about how long tasks will take to complete make scheduling a lot easier. They ensure that you won't have to keep changing your schedule. If you have a task that you accurately estimate will take six hours, for example, you can allot that time in your schedule and be reasonably confident you won't have to change the schedule. But what if you didn't accurately estimate the time for that task and allotted it only three hours?
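Purely as an illustration of how the booking/reservation data elements listed above might be modeled, here is a minimal record-type sketch. Every field name, type, and default here is an assumption; the requirements specify only which data must be retained, not how it is structured (and the requirements call for Java with an Access or Oracle database, so this Python sketch is for exposition only).

```python
# Hypothetical sketch of a few of the booking/reservation data
# elements as a record type. Field names and types are illustrative
# assumptions, not part of the stated requirements.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reservation:
    confirmation_number: str                 # unique per reservation (req 1.9)
    customer_first_name: str
    customer_last_name: str
    customer_phone: str
    number_of_occupants: int
    assigned_room: int
    default_room_rate: float
    guaranteed: bool = False                 # req 1.8
    rate_override_comment: Optional[str] = None  # required when rate changed (req 1.6.2)

r = Reservation("C-1001", "Ada", "Lovelace", "555-0101", 2, 214, 120.0)
print(r.guaranteed)  # False
```

A real implementation would also carry the check-in/check-out timestamps, payment fields and cancellation date from the list above, plus validation such as requiring a comment whenever the default room rate is overridden.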
It would throw your schedule off, and you'd need to rework it.
- Meet deadlines – If you're accurate in estimating the time it will take to complete tasks, you'll be better able to meet your deadlines. If your estimates aren't accurate, you may need to ask to change deadlines or disappoint others who are relying on you to complete certain tasks. With accurate time estimates, you'll also be more confident about setting deadlines because you know that the time you assign for completing each of your tasks is realistic.

Time estimate equation
It's important to estimate the time frames for your tasks accurately so that you can schedule all your work effectively and meet deadlines. To do this, you first need to know the requirements of each task and draw on your experience with similar activities – both when they run smoothly and when they don't – to produce three time estimates:
- The likely time is the time that the task normally takes you to complete. It helps to consider the time it takes to complete the task without interruption. You should also think about a time frame you would be comfortable with based on your workload, the task, and any external factors that may delay or speed up the completion of the task.
- The shortest time is the least amount of time that you have taken to complete the task in the past. It may also refer to the shortest time in which you think you can complete the task if there are no interruptions or distractions.
- You can estimate the longest time by considering what may go wrong when performing the task and then adding this extra time to the task's likely duration. This estimate should be based on your experience of this type of activity in the past, as well as on any foreseeable difficulties.

You use the three time estimates to calculate the shortest possible time to complete a task, based on a weighted average of the likely, shortest, and longest times.
Because in most cases a task will take the likely time to complete, this time is given more weight: multiply the likely time by 4, add the shortest time, and then add the longest time. Divide the total by 6 to get the shortest possible time.

One important thing to remember is that you must use the same units for each type of time. For example, if your likely time is a number of days, the shortest and longest times must also be in days. If your estimates are in different units, start by converting them so they are all the same. The time frames equation often produces a shortest possible time that is longer than the shortest time you put into the equation. This is because the equation helps ensure that you're realistic about how long things will take.

To manage your time effectively, you have to estimate the time it will take to complete each of your tasks. Doing this ensures you can schedule your work appropriately and meet all your deadlines. To estimate the time frames for your tasks, you can use a simple time frames equation, which uses estimates for the likely, shortest, and longest times to calculate the realistic, shortest possible time that it will take to complete a task.
https://library.skillport.com/courseware/Content/cca/pd_11_a02_bs_enus/output/html/sb/sbpd_11_a02_bs_enus002005.html

4. Identify and explain five (5) threats to your business that you need to consider for the success of this system.
Answer:
After assessing the strengths and weaknesses of your business for your business plan, look for external forces, like opportunities and threats, that may have an effect on its destiny.
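The weighted-average equation described above (the classic three-point estimate) can be sketched in a few lines of Python. The numbers in the example are illustrative only, not taken from the project:

```python
def shortest_possible_time(likely, shortest, longest):
    """Weighted-average estimate: (shortest + 4 * likely + longest) / 6.

    The likely time is weighted 4x because most runs of a task take
    roughly that long; all three inputs must use the same units.
    """
    return (shortest + 4 * likely + longest) / 6

# Example: a task that usually takes 6 hours, has taken as little as 4,
# and could take up to 11 if things go wrong.
estimate = shortest_possible_time(likely=6, shortest=4, longest=11)
print(estimate)  # 6.5
```

Note that the result (6.5 hours) is longer than the shortest time fed into the equation (4 hours), which matches the point made above: the equation pushes you toward a realistic, not an optimistic, figure.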
These changes include:
- The appearance of new or stronger competitors
- The emergence of unique technologies
- Shifts in the size or demographic composition of your market area
- Changes in the economy that affect customer buying habits
- Changes in customer preferences that affect buying habits
- Changes that alter the way customers access your business
- Changes in politics, policies, and regulations
- Fads and fashion crazes

List the threats and opportunities facing your business, and follow these guidelines:
- When listing opportunities, consider emerging technologies, availability of new materials, new customer categories, changing customer tastes, market growth, new uses for old products (think about how mobile phones and even eyeglasses now double as cameras and computers), new distribution or location opportunities, positive changes in your competitive environment, and other forces that can affect your success.
- When listing threats, consider the impact of shrinking markets, altered consumer tastes and purchase tendencies, raw material shortages, economic downturns, new regulations, changes that affect access to your business, and competitive threats, including new competing businesses and competitive mergers and alliances. Also think about the impact of expiring patents, labor issues, global issues, and new products that may make your offering outdated or unnecessary.

If you're having a tough time getting specific, look back at the strengths and weaknesses, but this time use the list to assess the strengths and weaknesses of a competitor. You won't know as much about your competitor's capabilities as you know about your own, but you probably know enough to flag areas of strength and weakness. Your competitor's strengths are potential threats to your business, and its weaknesses present potential opportunities.
http://www.dummies.com/business/start-a-business/business-plans/how-to-identify-opportunities-and-threats-in-business-planning/

5.
Write down three (3) elements of risk and two (2) examples of each that relate to the project. (9 Marks)
Answer:
All risk management standards agree that the goal of risk management is to enhance the chances of success of the relevant endeavor. However, each of them provides a different definition of risk: ISO 31000:2009 calls it "effect of uncertainty on objectives," the PMI "PMBOK Guide" has "an uncertain event or condition that, if it occurs, has a positive or negative effect on the project's objectives," and the preferred Risk Doctor definition is "uncertainty that matters."

Each description is true, but only partly so. This matters because, until we know what we are dealing with, we cannot manage it in the best way possible:
- If we use the ISO definition, then our first thought will be to focus on the effect;
- If we follow PMI, then we will start from the potential occurrence;
- With the Risk Doctor definition, we start from uncertainty.

Each of these (the effect, the event, and the uncertainty) is a component of risk, but on its own is not a risk. Even taken in pairs, they do not provide the full picture:
- an effect plus an event is an issue;
- an event plus an uncertainty is a prediction;
- an uncertainty plus an effect is a concern.

It is only when you put all three together that you can see what a risk is made of, and use this information to decide on what, if anything, to do about it. Of course, this then requires a longer definition, but the goal of enhancing the chances of success is worth the effort.

But what is "success"? It is more than simply "meeting objectives"; it must also include the condition of "complying with project constraints" in order for the final result to remain within scope.
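The three components can also be sketched as a simple data structure. The `exposure` helper (likelihood multiplied by effect) is a common quantification convention added for illustration; it is not part of the definitions quoted above, and the example risk and figures are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """A risk as three parts: uncertain situation, likelihood, and effect."""
    situation: str     # the uncertainty (what might happen, and why)
    likelihood: float  # probability of occurrence, between 0.0 and 1.0
    effect: float      # impact on project success if it occurs
                       # (negative for threats, positive for opportunities)

    def exposure(self) -> float:
        # Illustrative convention only: weight the effect by how
        # likely the situation is to occur.
        return self.likelihood * self.effect

# Hypothetical threat: a key supplier may fail during the project.
r = Risk("key supplier is financially unstable", likelihood=0.2, effect=-50000)
print(r.exposure())  # -10000.0
```

Describing a risk this way forces all three components to be stated, which is exactly the point the passage makes: an entry missing its situation, likelihood, or effect is not yet a risk.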
Given this clarification, a more complete definition is: "Risk consists of three parts: an uncertain situation, the likelihood of occurrence of the situation, and the effect (positive or negative) that the occurrence would have on project success."

The three-part definition helps with three important stages of the risk management process:
1. In risk identification, it supports the structured description of a risk ("risk metalanguage") in the form: "Because of <situation>, <event> may occur, leading to <effect>."
2. In risk evaluation, knowledge of potential causes allows you to evaluate the likelihood; identification of effects provides a basis for quantifying the impact.
3. In risk response planning, the different parts of the definition suggest different response approaches:
- for threat avoidance, understanding the situation may allow you to stop it happening or protect against its results;
- understanding the situation can also be used to help us exploit opportunities;
- in risk transfer or sharing, we seek a partner better equipped to address the effect;
- for threat reduction or opportunity enhancement, we focus on the effect and/or the likelihood;
- in risk acceptance, any contingency plan has to address the effect.

Including these three components when you describe risks (the uncertainty, the event, and the effect) will help everyone involved in risk management to take account of these three important aspects of risk, and act on them to enhance the chances of success.

EXAMPLE
Two examples of Managing risk in hote