Data Classification and Data Loss Prevention

 

Before you begin: review the information provided in this resource: https://www.pic.gov/sites/default/files/RACI%20Chart%20Overview%20v2.pdf

Safeguarding the integrity and confidentiality of data resources takes a team of individuals working together throughout an organization. But how does an organization know that it has enough people in the right roles, performing the right tasks, to ensure that digital assets will be protected from loss or harm?

The RACI matrix is a tool that can be used to outline the various roles and responsibilities required to provide this protection. For this discussion, you will prepare a RACI matrix that outlines the roles of key players in the organization who have data protection responsibilities (i.e., asset security and data protection). Your matrix should specifically address executives (C-level), managers, supervisors, and employees. The tasks that you should address are: generating information, using information, classifying information, and managing, using, and implementing data loss prevention technologies.
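As a minimal sketch, the structure of such a matrix can be modeled as a simple table of roles and tasks. The R/A/C/I assignments below are illustrative placeholders only, not a recommended allocation for any real organization:

```python
# Sketch: a RACI matrix as a dict of task -> assignments per role.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
# All assignments here are hypothetical examples.
ROLES = ["Executives", "Managers", "Supervisors", "Employees"]
TASKS = {
    "Generate information":       ["I", "A", "R", "R"],
    "Use information":            ["I", "A", "R", "R"],
    "Classify information":       ["A", "R", "C", "I"],
    "Implement DLP technologies": ["A", "R", "R", "C"],
}

def print_matrix():
    """Render the matrix as a fixed-width text table."""
    print(f"{'Task':<28}" + "".join(f"{r:<12}" for r in ROLES))
    for task, cells in TASKS.items():
        print(f"{task:<28}" + "".join(f"{c:<12}" for c in cells))

print_matrix()
```

A real briefing paper would of course present this as a formatted table, but the structure (one row per task, one column per role, exactly one Accountable party per task) is the same.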

After you have completed your chart, write a brief discussion of responsibilities for each role listed in the rows of your chart. Each role should be addressed in a separate paragraph.

Combine your matrix and your narrative descriptions into a briefing paper for a working group that has been charged with reviewing and improving the company’s data classification business processes. Post your paper in the body of your reply to this topic.

Provide in-text citations and references for 3 or more authoritative sources. Put the reference list at the end of your posting. Use a consistent and professional style for your citations and reference list entries. (Hanging indent is NOT required.)

Need a response to the discussion below.

Please read the discussion posts below and provide two responses of 50 to 75 words each.

Post#1

 

DNS failover is designed to operate at the DNS level, that is, the level before a client connects to any of your servers. DNS essentially converts your domain name (e.g., www.example.com) into the IP address of your server. By monitoring applications and altering DNS dynamically so that clients are pointed to different IP addresses, you can control traffic fairly easily and inexpensively. However, DNS failover does have two notable limitations: a) DNS failover does not fix an outage when a client is already connected to an application, because the client’s browser may not query DNS again for quite some time. b) DNS failover has a TTL cache issue: it could take anywhere from 1 to 30 minutes or more for an IP address change to become visible around the world, since many ISPs’ recursive DNS servers cache records longer than required in order to reduce traffic. “Time is an important component of the Domain Name System (DNS) and the DNS Security Extensions (DNSSEC). DNS caches rely on an absolute notion of time (e.g., “August 8, 2018 at 11:59pm”) to determine how long DNS records can be cached (i.e., their Time To Live (TTL)) and to determine the validity interval of DNSSEC signatures. This is especially interesting for two reasons” (Malhotra, 2019).
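The TTL limitation described above can be illustrated with a minimal sketch of a caching resolver. The hostname and IP addresses are made-up examples; real resolvers are far more complex, but the core behavior (serving a cached answer until the TTL expires, regardless of what the authoritative server now says) is the same:

```python
import time

# Sketch of why DNS failover is delayed by resolver caching.
class CachingResolver:
    def __init__(self):
        self._cache = {}  # name -> (ip, expiry timestamp)

    def resolve(self, name, authoritative_lookup, ttl=300):
        ip, expires_at = self._cache.get(name, (None, 0.0))
        if time.monotonic() < expires_at:
            return ip  # cached answer served until the TTL expires
        ip = authoritative_lookup(name)
        self._cache[name] = (ip, time.monotonic() + ttl)
        return ip

resolver = CachingResolver()
# Before failover, the authoritative server returns the primary IP...
assert resolver.resolve("www.example.com", lambda n: "192.0.2.10") == "192.0.2.10"
# ...after failover it returns the secondary, but cached clients
# still receive the primary's address until the TTL runs out.
assert resolver.resolve("www.example.com", lambda n: "192.0.2.20") == "192.0.2.10"
```

This is exactly why a low TTL is usually configured on records used for failover, and why even then some clients lag behind the change.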

DNS failover has been around for quite some time and is reliable. As a result, sequential mode is the most popular for e-commerce applications, or wherever back-end databases exist and some type of synchronization step is required before falling back to the primary. The cloud network monitors all of the available servers based on the monitoring criteria you specify via the management GUI, and when it detects that the primary is down, it automatically fails over to the secondary IP (or, if the secondary is down, to the tertiary, and so on). In a typical static-content scenario, when the primary comes back, it updates DNS again to send traffic to the primary. However, in an application where back-end databases must be synchronized, you can easily disable auto-failback to prevent the primary server from receiving traffic again. “A passive redundancy approach with diversification has been applied to intrusion-tolerant systems (ITSs) that aim to tolerate cyberattacks on a server” (Okamoto, 2017).

The monitoring system will still alert you when the primary is back online, but you have to manually force it to start receiving traffic again. This allows you to perform whatever tasks are required to ensure that the primary has the latest copy of the database. One critical drawback of using DNS failover where back-end synchronization is critical arises when the primary is down for only a very short period: long enough to trigger a failover, but short enough that existing client connections (or new connections where the TTL cache has not expired) still reach the primary site. In this case, both the primary and secondary sites could receive traffic and build two divergent databases that are very difficult to synchronize. “A DNS Load Balancer Daemon (LBD) has been developed at CERN as a cost-effective way to balance applications accepting DNS timing dynamics and not requiring persistence” (Reguero Naredo, 2017).
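The failover-priority logic with auto-failback disabled can be sketched as follows. The health-check interface and IP addresses are hypothetical; real cloud monitoring systems expose this differently, but the decision rule is the same: fail over down the priority list, and never automatically return to the primary while the current server is still healthy:

```python
# Sketch of a sequential failover decision with auto-failback disabled.
def pick_active_ip(servers, is_healthy, current, auto_failback=False):
    """servers is in priority order: primary, secondary, tertiary, ...
    Returns the IP that DNS should point to."""
    healthy = [ip for ip in servers if is_healthy(ip)]
    if not healthy:
        return current  # nothing to fail over to; keep the last answer
    if not auto_failback and current in healthy:
        # The primary may have recovered, but we stay on the current
        # server until an operator forces failback, leaving time to
        # re-synchronize the back-end databases.
        return current
    return healthy[0]  # first healthy server in priority order

servers = ["192.0.2.10", "192.0.2.20", "192.0.2.30"]
down = {"192.0.2.10"}  # primary down -> fail over to the secondary
active = pick_active_ip(servers, lambda ip: ip not in down, "192.0.2.10")
down = set()           # primary is back, but auto-failback is off
assert pick_active_ip(servers, lambda ip: ip not in down, active) == active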

Post#2

 

            Cloud computing efficiency and security depend on several security concerns related to networks and connectivity. Protecting information systems in a cloud environment from threats can be done largely by maintaining secure data traffic. The concepts of Domain Name System (DNS) failover and cloud failover arise in this context of data traffic security. These two are among the most popular approaches in cloud security and share both similarities and differences.

Comparison

            The functionality of both DNS and cloud failover depends largely on the authentication of IP addresses. Both preventive measures in cloud service security determine the actions necessary to recover from failures, such as downtime recovery. Both solutions need open access to a public cloud in order to become active. The monitoring mechanisms used in the two systems are almost identical and work in the same way for server identification and fault detection (Naredo & Pardavila, 2017). DNS and cloud failover are equally capable of delivering fast recovery, within as little as 60 seconds, and can change functions automatically based on the scenario.

The differences between cloud and DNS failover start with the process through which they run. DNS failover takes the simplest approach, re-routing traffic when a particular server fails so that traffic runs through other active servers. A system embedded in the process detects an active server through a technique known as the ‘round-robin’ method. A fault of DNS failover is that cached data keeps circulating until a user’s Time to Live (TTL) expires on the server. “A DNS Load Balancer Daemon (LBD) has been developed at CERN as a cost-effective way to balance applications accepting DNS timing dynamics and not requiring persistence” (Naredo & Pardavila, 2017).
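The round-robin method mentioned above can be sketched very simply: each query rotates through the answer pool so that load is spread across servers. The IP addresses are illustrative; real DNS round-robin rotates the ordering of the whole record set rather than returning a single address, but the distribution effect is the same:

```python
from itertools import cycle

# Sketch of DNS round-robin: successive queries walk the server pool.
pool = ["192.0.2.10", "192.0.2.20", "192.0.2.30"]
rotation = cycle(pool)

answers = [next(rotation) for _ in range(4)]
# The fourth answer wraps back around to the first server.
assert answers == ["192.0.2.10", "192.0.2.20", "192.0.2.30", "192.0.2.10"]
```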

Cloud failover is popular for the accuracy and precision with which it carries out the tasks necessary for sustained uptime. It does not send cached data back frequently. Sessions handled by the cloud have better utilization capacity, as cloud failover allows remote deployment, unlike DNS failover, where critical applications of a server are interrupted. To work around TTLs, cloud failover uses session-wise load balancing, which proves more efficient than the round-robin of DNS. Further, DNS failover fails to offer flexibility because of pre-determined conditioning that makes it a rigid system. Cloud failover, however, often includes DNS to offer optimized results via a permanent proxy IP address. “DNS, being open-source, is less secure and it has no uncommon method for deciding if data has been intercepted while transmission or information of a domain name originates from an approved domain owner or not” (Ansari et al., 2020).
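Session-wise load balancing, in contrast to round-robin, maps each session to a consistent server so that repeated requests stick to one back end. A minimal sketch using a hash of the session ID follows; the hashing scheme is illustrative, not any particular product's mechanism:

```python
import hashlib

# Sketch of session-affine ("session-wise") load balancing: the same
# session ID always selects the same server from the healthy pool.
def pick_server(session_id, servers):
    digest = hashlib.sha256(session_id.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
first = pick_server("user-42", servers)
# Repeated requests within the same session stick to one server,
# unlike round-robin, which would rotate them across the pool.
assert all(pick_server("user-42", servers) == first for _ in range(5))
```

Because selection depends only on the session ID and the pool, the balancer needs no per-client DNS caching for stickiness, which is why this approach sidesteps the TTL problem described for DNS failover.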

Conclusion

As a result of these limitations, cloud failover service is costlier than DNS. There are steady improvements identifiable in DNS across the global network, and some services now allow cache-free recovery of servers. In spite of these advancements, DNS failover has yet to compete equally with cloud failover. On the other hand, the cloud has its own challenges. “In spite of Cloud Computing services seem to be very attractive as an alternative to traditional on-premise data centers, there is still some concern about the providers availability” (Goncalves & Fagotto, 2018).


The Final Portfolio Project is a comprehensive assessment of what you have learned during this course.  

There are several emerging concepts that are using Big Data and Blockchain Technology. Please search the internet and highlight 5 emerging concepts that are exploring the use of Blockchain and Big Data.

Conclude your paper with a detailed conclusion section. 

The paper needs to be approximately 6-8 pages long, plus a title page and a references page (for a total of 8-10 pages). Be sure to use proper APA formatting and citations to avoid plagiarism.

Your paper should meet these requirements:

  • Be approximately six to eight pages in length, not including the required cover page and reference page.
  • Follow APA 7 guidelines. Your paper should include an introduction, a body with fully developed content, and a conclusion.
  • Support your answers with the readings from the course and at least two scholarly journal articles to support your positions, claims, and observations, in addition to your textbook. The UC Library is a great place to find resources.
  • Be clear, well-written, concise, and logical, using excellent grammar and style techniques. You are being graded in part on the quality of your writing.

Opposing Shadow IT 2.0

Organizations do not always provide information systems that allow their staff to perform their responsibilities efficiently and effectively. Read the article, “Lifting the Veil Off Shadow IT.” Then, respond to the following:

  • Take a position favoring or opposing shadow IT.
  • If you are in favor, give one reason that shadow IT should be allowed. If you are not in favor, provide one way that the organization can reduce the risks of shadow IT.
  • 1000 words

Portfolio Project

You will respond to three separate prompts but prepare your paper as one research paper.

1. Start your paper with an introductory paragraph.

2. Prompt 1 “Data Warehouse Architecture” (2 pages): Explain the major components of a data warehouse architecture, including the various forms of data transformations needed to prepare data for a data warehouse. Also, describe in your own words current key trends in data warehousing.

3. Prompt 2 “Big Data” (2 pages): Describe your understanding of big data and give an example of how you’ve seen big data used either personally or professionally. In your view, what demands is big data placing on organizations and data management technology?

4. Prompt 3 “Green Computing” (2 pages): The need for green computing is becoming more obvious considering the amount of power needed to drive our computers, servers, routers, switches, and data centers. Discuss ways in which organizations can make their data centers “green”. In your discussion, find an example of an organization that has already implemented IT green computing strategies successfully. Discuss that organization and share your link.

5. Conclude your paper with a detailed conclusion section.

The paper needs to be 8 pages long and include 6 references. Be sure to use proper APA formatting and citations to avoid plagiarism.

Final Project (Research Paper) – Enterprise Risk Management

Risk management is one of the most important components in empowering an organization to achieve its ultimate vision. With proper risk management culture and knowledge, team members will be “speaking” the same language, and they will leverage common analytical abilities to identify and mitigate potential risks as well as exploit opportunities in a timely fashion. In order to consolidate efforts, the existence of an integrated framework is crucial. 

This is why an ERM is necessary to the fulfillment of any organization’s goals and objectives. In your final research project for the course, your task is to write a 7-10 page paper discussing the following concepts:

  • Introduction – What is an ERM?
  • Why Should an Organization Implement an ERM Application?
  • What are some Key Challenges and Solutions to Implementing an ERM?
  • What is Important for an Effective ERM?
  • Discuss at least one real organization that has been effective with implementing an ERM framework/application.
  • Conclusion – Final thoughts/future research/recommendation

The paper needs to be approximately 7-10 pages long, plus a title page and a references page (for a total of 9-12 pages). Be sure to use proper APA formatting and citations to avoid plagiarism.

Your paper should meet the following requirements:

  • Be approximately seven to ten pages in length, not including the required cover page and reference page.
  • Follow APA 7 guidelines. Your paper should include an introduction, a body with fully developed content, and a conclusion.
  • Support your answers with the readings from the course, the course textbook, and at least FIVE scholarly journal articles to support your positions, claims, and observations, in addition to your textbook. The UC Library is a great place to find supplemental resources.
  • Be clear, well-written, concise, and logical, using excellent grammar and style techniques. You are being graded in part on the quality of your writing.

Support and Software Deployment

Please respond to the following:

  • Your software has gone live and is in the production environment. The project gets handed over to the IT support team. Research post-deployment software support. What are some of the challenges that can occur?