Create a Web page that contains a text box in which users can enter a date. Also include a button that executes the test() method to validate the date against a regular expression. Write a regular expression pattern that allows users to enter a one- or two-digit month, a one- or two-digit day, and a two- or four-digit year. Also allow users to separate the month, day, and year with either dashes or forward slashes. Users should be able to enter any of the following date formats: 1-25-07, 1-25-2007, or 01/25/2007.
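One possible pattern for this exercise is sketched below (the function name validateDate is a placeholder; your page would call it from the button's onclick handler). The backreference \1 additionally forces the same separator to be used in both positions, which is one reasonable reading of the requirement:

```javascript
// Month: 1-2 digits; day: 1-2 digits; year: 2 or 4 digits.
// ([-\/]) captures the separator, and \1 requires the second
// separator to match the first.
const datePattern = /^\d{1,2}([-\/])\d{1,2}\1(\d{4}|\d{2})$/;

function validateDate(value) {
  return datePattern.test(value);
}

console.log(validateDate("1-25-07"));    // true
console.log(validateDate("1-25-2007"));  // true
console.log(validateDate("01/25/2007")); // true
console.log(validateDate("1-25/07"));    // false (mixed separators)
```

If mixed separators such as 1-25/07 should also be accepted, drop the backreference and write [-\/] in both positions.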

BI-20HR

Discussion Questions (Chapter 8)

  1. How does prescriptive analytics relate to descriptive and predictive analytics?
  2. Explain the differences between static and dynamic models. How can one evolve into the other?
  3. What is the difference between an optimistic approach and a pessimistic approach to decision making under assumed uncertainty?
  4. Explain why solving problems under uncertainty sometimes involves assuming that the problem is to be solved under conditions of risk.

10. What is the difference between decision analysis with a single goal and decision analysis with multiple goals (i.e., criteria)? Explain the difficulties that may arise when analyzing multiple goals.

Chapter 9

  1. What is Big Data? Why is it important? Where does Big Data come from?
  2. What do you think the future of Big Data will be? Will it lose its popularity to something else? If so, what will it be?
  3. What is Big Data analytics? How does it differ from regular analytics?
  4. What are the critical success factors for Big Data analytics?
  5. What are the big challenges that one should be mindful of when considering implementation of Big Data analytics?

Application Case 9 (For these questions I have uploaded the application case; read it and answer the questions.)

Questions for the Opening Vignette

  1. What problem did customer service cancellation pose to AT’s business survival?
  2. Identify and explain the technical hurdles presented by the nature and characteristics of AT’s data.
  3. What is sessionizing? Why was it necessary for AT to sessionize its data?
  4. Research other studies where customer churn models have been employed. What types of variables were used in those studies? How is this vignette different?
  5. Besides Teradata Vantage, identify other popular Big Data analytics platforms that could handle the analysis described in the preceding case. 

Please read the case below to answer the questions for the opening vignette.

A telecom company (named Access Telecom [AT] for privacy reasons) wanted to stem the tide of customers churning from its telecom services. Customer churn in the telecommunications industry is common. However, Access Telecom was losing customers at an alarming rate. Several reasons and potential solutions were attributed to this phenomenon. The management of the company realized that many cancellations involved communications between the customer service department and the customers. To this end, a task force comprising members from the customer relations office and the information technology (IT) department was assembled to explore the problem further. Their task was to explore how the problem of customer churn could be reduced based on an analysis of the customers’ communication patterns (Asamoah, Sharda, Zadeh, & Kalgotra, 2016).

Big Data Hurdles

Whenever customers had a problem with issues such as their bill, plan, or call quality, they could contact the company in multiple ways. These included a call center, the company Web site (contact-us links), and physical service-center walk-ins. Customers could cancel an account through any of these interactions. The company wanted to see if analyzing these customer interactions could yield any insights about the questions customers asked or the contact channel(s) they used before canceling their accounts. The data generated by these interactions were in both text and audio formats, so AT would have to combine all the data into one location. The company explored the use of traditional platforms for data management but soon found they were not versatile enough to handle advanced data analysis in a scenario with multiple formats of data from multiple sources (Thusoo, Shao, & Anthony, 2010).

There were two major challenges in analyzing this data: a variety of data coming from multiple sources, and a large volume of data.

  1. DATA FROM MULTIPLE SOURCES: Customers could connect with the company by accessing their accounts on the company’s Web site, allowing AT to generate Web log information on customer activity. The Web logs allowed the company to identify if and when a customer reviewed his/her current plan, submitted a complaint, or checked the bill online. At the customer service center, customers could also lodge a service complaint, request a plan change, or cancel the service. These activities were logged into the company’s transaction system and then the enterprise data warehouse. Last, a customer could call the customer service center on the phone and transact business just as he/she would do in person at a customer service center. Such transactions could involve a balance inquiry or an initiation of plan cancellation. Call logs were available in one system with a record of the reasons a customer was calling. For meaningful analysis to be performed, the individual data sets had to be converted into similar structured formats.
  2. DATA VOLUME: The second challenge was the sheer quantity of data from the three sources that had to be extracted, cleaned, restructured, and analyzed. Although previous data analytics projects mostly utilized a small sample set of data for analysis, AT decided to leverage the multiple variety and sources of data as well as the large volume of data recorded to generate as many insights as possible.

An analytical approach that could make use of all the channels and sources of data, although huge, would have the potential of generating rich and in-depth insights from the data to help curb the churn.

Solution

Teradata Vantage’s unified Big Data architecture (previously offered as Teradata Aster) was utilized to manage and analyze the large multistructured data. We will introduce Teradata Vantage in Section 9.8. A schematic of which data was combined is shown in Figure 9.1. Based on each data source, three tables were created with each table containing the following variables: customer ID, channel of communication, date/time stamp, and action taken. Prior to final cancellation of a service, the action-taken variable could be one or more of these 11 options (simplified for this case): present a bill dispute, request for plan upgrade, request for plan downgrade, perform profile update, view account summary, access customer support, view bill, review contract, access store locator function on the Web site, access frequently asked questions section on the Web site, or browse devices. The target of the analysis focused on finding the most common path resulting in a final service cancellation. The data was sessionized to group a string of events involving a particular customer into a defined time period (5 days over all the channels of communication) as one session. Finally, Vantage’s nPath time sequence function (operationalized in an SQL-MapReduce framework) was used to analyze common trends that led to a cancellation.
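The sessionizing step described above can be illustrated with a small sketch. This is not Vantage's actual SESSIONIZE or nPath function; it is a hypothetical stand-in (field names like customerId and action are assumptions) that groups each customer's events into sessions, starting a new session whenever more than 5 days separate consecutive events:

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Group events into sessions per customer. A new session starts when the
// customer changes or when the gap since the previous event exceeds the
// window (5 days in the case described above).
function sessionize(events, windowDays = 5) {
  // Order by customer, then by timestamp.
  const sorted = [...events].sort((a, b) =>
    a.customerId === b.customerId
      ? a.timestamp - b.timestamp
      : (a.customerId < b.customerId ? -1 : 1));

  const sessions = [];
  let current = null;
  for (const ev of sorted) {
    const startNew =
      !current ||
      current.customerId !== ev.customerId ||
      ev.timestamp - current.end > windowDays * DAY_MS;
    if (startNew) {
      current = { customerId: ev.customerId, end: ev.timestamp, actions: [] };
      sessions.push(current);
    }
    current.actions.push(ev.action);
    current.end = ev.timestamp;
  }
  return sessions;
}
```

Once events are grouped this way, a path analysis (like Vantage's nPath) can look for the most common action sequences within sessions that end in a cancellation.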

Figure 9.1

Results

The initial results identified several routes that could lead to a request for service cancellation; the company found thousands of routes a customer might take to cancel service. A follow-up analysis was performed to identify the most frequent routes to cancellation requests, termed the Golden Path. The top 20 most frequently occurring paths that led to a cancellation were identified for both short and long terms. A sample is shown in Figure 9.2.

Figure 9.2

This analysis helped the company identify customers before they canceled their service and offer incentives, or at least escalate problem resolution to a level where the customer’s path to cancellation did not materialize.

Digital Forensics Paper

Select a topic, research its current background, survey the related information, and analyze the structure of the requirements. Finally, design the paper with an appropriate organizational structure and the surveyed topics, following APA format.

Then write the term paper.

My TOPIC: Overview of Tools Used in Digital Forensics

You must follow this format:

Title: Overview of Tools Used in Digital Forensics

Abstract:

  1. Introduction
  2. Background 
  3. Current Issues and Suggest Topics
  4. Methods, Techniques, and Evaluations
  5. Future Works
  6. Summary
  7. References

Assignment Instructions:

1. No ZIP file

2. The assignment must be submitted as ONE single MS Word/PDF file

3. At least 10 pages and 3 references

4. Use 12-point font size and 1.5 line spacing

5. No more than 4 figures and 3 tables

6. Follow APA style and content format: TAMUC follows the APA (American Psychological Association) for writing style in all its courses which require a Paper or Essay.

http://www.apastyle.org/

Grading Rubric

Grading for this assignment will be based on completion of the above requirements, quality, logic and organization of the paper, and language and writing skills. Please see as follows:

– Comprehension of Assignment (Addressed the question completely and thoroughly. Provided additional supporting evidence, demonstrating a full comprehension of subject matter): 20 percent

– Application of Course Knowledge and Content (Thorough technical application of course knowledge and content in a complete and concise manner): 20 percent

– Organization of Ideas (Original ideas are effectively developed and presented in a logical, sequential order throughout the entire assignment. Includes adequate and appropriate supporting evidence): 20 percent

– Writing Skills (Mechanics (spelling, grammar, and punctuation) are flawless, including proficient demonstration of citations and formatting throughout the entire assignment): 20 percent

– Research Skills (Accurate and applicable use of resources relevant to the subject matter that enhance the overall assignment): 20 percent

Java Programming

 

Submit the word processing document that contains the screen shots showing you have successfully executed the DiceSimulation.java program for the three cases: once where you used a while loop, a second time where you used a do-while loop, and a third time where you used a for loop.

Name the word processing document containing the screen shots that demonstrate successful execution of the DiceSimulation.java program "XYLab2.docx", where "X" and "Y" are your first and last initials.

Include a comment containing your full name in all Java source code files that you create or modify.
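The DiceSimulation.java starter file is not included here, so its exact simulation logic is unknown. As an illustration only of the three loop forms the lab asks for (shown in JavaScript rather than Java, though while, do-while, and for work the same way in both languages), here is a hypothetical sketch that counts how many rolls of two dice it takes to get snake eyes:

```javascript
// Roll one six-sided die: an integer from 1 to 6.
function rollDie() {
  return Math.floor(Math.random() * 6) + 1;
}

// Version 1: while loop (condition checked before each iteration).
function rollsUntilSnakeEyesWhile() {
  let rolls = 0, d1 = 0, d2 = 0;
  while (d1 !== 1 || d2 !== 1) {
    d1 = rollDie();
    d2 = rollDie();
    rolls++;
  }
  return rolls;
}

// Version 2: do-while loop (body always runs at least once).
function rollsUntilSnakeEyesDoWhile() {
  let rolls = 0, d1, d2;
  do {
    d1 = rollDie();
    d2 = rollDie();
    rolls++;
  } while (d1 !== 1 || d2 !== 1);
  return rolls;
}

// Version 3: for loop (counter managed in the loop header).
function rollsUntilSnakeEyesFor() {
  for (let rolls = 1; ; rolls++) {
    if (rollDie() === 1 && rollDie() === 1) return rolls;
  }
}
```

Whatever the actual simulation in DiceSimulation.java does, the point of the lab is that all three forms express the same repetition; only the placement of the loop condition and counter changes.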

Cyber

Need help with a question. Will give instructions once we communicate.

Data Mining

1. What is data mining? In your answer, address the following:
a) Is it another hype?
b) Is it a simple transformation or application of technology developed from databases, statistics, machine learning, and pattern recognition?
c) We have presented a view that data mining is the result of the evolution of database technology. Do you think that data mining is also the result of the evolution of machine learning research? Can you present such views based on the historical progress of this discipline? Do the same for the fields of statistics and pattern recognition.
d) Describe the steps (1-2 lines each) involved in data mining when viewed as a process of knowledge discovery.
2. How is a data warehouse different from a database? How are they similar?
3. Define each of the following data mining functionalities: characterization, discrimination, association and correlation analysis, classification, regression, clustering, and outlier analysis. Give examples of each data mining functionality, using a real-life database that you are familiar with.
4. Present an example where data mining is crucial to the success of a business. What data mining functionalities does this business need (e.g., think of the kinds of patterns that could be mined)?
5. Describe three challenges to data mining regarding data mining methodology and user interaction issues.
6. What are the major challenges of mining a huge amount of data (such as billions of tuples) in comparison with mining a small amount of data (such as a few hundred tuples)?
7. Briefly describe the following advanced database systems and applications: object-relational databases, spatial databases, text databases, multimedia databases, and the World Wide Web.
8. Outliers are often discarded as noise. However, one person’s garbage could be another’s treasure. For example, exceptions in credit card transactions can help us detect the fraudulent use of credit cards. Using fraud detection as an example, provide three similar examples where outliers are important to detect.

Question

 

Explain how a nonprofit organization like Doctors Without Borders might assign overhead costs to its operations in a particular refugee camp. (300 words) (10 points)

Chapter 4 – Having learned about different kinds of costs and the systems used to accumulate them, what kinds of costs would a social media company like Twitter or TikTok incur? Are there labor, material, and overhead costs? Are there fixed and variable costs, and direct and indirect costs? What types of services do social media companies produce? And how do the costs incurred relate to the services they generate? (300 words) (10 points)

Chapter 5 – Assume you are the cost accountant of a company that manufactures a single product. Would you advise your company to adopt an ABC system? Why or why not? Explain.