A management
information system (MIS) is a system that provides information needed to manage organizations
effectively. Management information systems are regarded as a subset of the overall internal control procedures in a business, covering the people, documents, technologies, and procedures that management accountants use to solve business problems such as costing a product or service or developing a business-wide strategy. Management information systems are distinct from regular information systems in that they are used to analyze other information systems applied in the organization's operational activities.
Role and Importance of Information Systems in the Management Process
Initially, internal reporting in businesses and other organizations was produced manually and only periodically, as a by-product of the accounting system with some additional statistics, and gave only limited and delayed information on management performance. Data had to be separated out by hand according to the requirements of the organization. Later, data was distinguished from information, so that instead of a mass of raw data, only the pertinent data that the organization needed was stored.
Earlier business computers were mostly used for relatively simple operations such as tracking sales or payroll data, often without much detail. Over time, these applications became more complex and began to store increasing amounts of information, while also interlinking with previously separate information systems. As more and more data was stored and linked, analysts began to examine this information in greater detail, creating entire management reports from the raw, stored data.
The term "MIS" arose to describe these kinds of applications, which
were developed to provide managers with information about sales, inventories,
and other data that would help in managing the enterprise. Today, the term is used broadly in a number of contexts and includes (but is not limited to): decision support systems, resource and people management applications, enterprise resource planning (ERP), supply chain management (SCM), customer relationship management (CRM), project management, and database retrieval applications.
An MIS is a planned system for the collection, processing, and storage of data in the form of information needed to carry out the management functions. In a way, it is a documented report of the activities that were planned and executed. According to Philip Kotler, "A marketing information system consists of people, equipment, and procedures to gather, sort, analyze, evaluate, and distribute needed, timely, and accurate information to marketing decision makers."
The terms MIS and information system are often confused. Information systems include systems that are not intended for decision making. The area of study called MIS is sometimes referred to, in a restrictive sense, as information technology management; that area of study should not be confused with computer science. IT service management is a practitioner-focused discipline. MIS also differs from ERP, which incorporates elements that are not necessarily focused on decision support.
A successful MIS must support a business's five-year plan or its equivalent. It must provide reports based on performance analysis in areas critical to that plan, with feedback loops that allow for fine-tuning of every aspect of the business, including recruitment and training regimens. In effect, an MIS must not only indicate how things are going, but also why they are not going as well as planned where that is the case. These reports would cover performance relative to the cost centers and projects that drive profit or loss, identify individual accountability, and do so in virtual real time.
Components of MIS
1. Decision Structure
a. Structured Decision –
Many analysts categorize decisions
according to the degree of structure involved in the decision-making activity.
Business analysts describe a structured decision as one in which all three
components of a decision—the data, process, and evaluation—are determined.
Since structured decisions are made on a regular basis in business
environments, it makes sense to place a comparatively rigid framework around
the decision and the people making it.
Structured decision support systems may
simply use a checklist or form to ensure that all necessary data is collected
and that the decision making process is not skewed by the absence of necessary
data. If the choice is also to support the procedural or process component of
the decision, then it is quite possible to develop a program either as part of
the checklist or form. In fact, it is also possible and desirable to develop
computer programs that collect and combine the data, thus giving the process a
high degree of consistency or structure. When there is a desire to make a
decision more structured, the support system for that decision is designed to
ensure consistency. Many firms that hire individuals without a great deal of
experience provide them with detailed guidelines on their decision making
activities and support them by giving them little flexibility. One interesting
consequence of making a decision more structured is that the liability for
inappropriate decisions is shifted from individual decision makers to the
larger company or organization.
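The checklist idea described above can be sketched in a few lines of code. This is a minimal illustration only, not any real firm's system: the required fields and the approval rule are invented for the example.

```python
# Sketch of a structured-decision support check: before a routine
# credit-limit decision is made, verify every required data item is present,
# then apply a predetermined process. All field names and the 10% rule are
# illustrative assumptions.

REQUIRED_FIELDS = ["customer_id", "annual_revenue", "payment_history",
                   "requested_limit"]

def missing_data(decision_input: dict) -> list:
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not decision_input.get(f)]

def approve_limit(decision_input: dict) -> str:
    """Apply a fixed, fully structured rule once the data is complete."""
    gaps = missing_data(decision_input)
    if gaps:
        # the absence of necessary data blocks the decision entirely
        raise ValueError(f"decision blocked; missing data: {gaps}")
    # the process component is predetermined: a simple ratio rule
    if decision_input["requested_limit"] <= 0.1 * decision_input["annual_revenue"]:
        return "approve"
    return "refer to supervisor"
```

Because the data, process, and evaluation are all fixed in advance, two analysts given the same inputs always reach the same conclusion, which is exactly the consistency a structured decision framework is meant to enforce.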
b. Unstructured Decision –
At the other end of the continuum are
unstructured decisions. While these decisions have the same components as
structured ones—data, process, and evaluation—there is little agreement on
their nature. With unstructured decisions, for example, each decision maker may
use different data and processes to reach a conclusion. In addition, because of
the nature of the decision, there may be only a limited number of people within the organization who are even qualified to evaluate the decision.
Generally, unstructured decisions are
made in instances in which all elements of the business environment—customer
expectations, competitor response, cost of securing raw materials, etc.—are not
completely understood (new product and marketing strategy decisions commonly
fit into this category). Unstructured decision systems typically focus on the
individual or team that will make the decision. These decision makers are
usually entrusted with decisions that are unstructured because of their
experience or expertise, and therefore it is their individual ability that is
of value. One approach to support systems in this area is to construct a
program that simulates the process used by a particular individual. In essence,
these systems—commonly referred to as "expert systems"—prompt the
user with a series of questions regarding a decision situation. "Once the
expert system has sufficient information about the decision scenario, it uses
an inference engine which draws upon a data base of expertise in this decision
area to provide the manager with the best possible alternative for the
problem," explained Jatinder N.D. Gupta and Thomas M. Harris in the Journal
of Systems Management. "The purported advantage of this decision aid is
that it allows the manager the use of the collective knowledge of experts in
this decision realm. Some of the current DSS applications have included
long-range and strategic planning policy setting, new product planning, market
planning, cash flow management, operational planning and budgeting, and
portfolio management."
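A toy version of the expert-system pattern Gupta and Harris describe can make the mechanism concrete: answers to a series of questions are matched by an inference engine against a small base of expertise. The rules, the decision area (pricing a new product), and all answer values below are invented for illustration; a real inference engine would be far richer.

```python
# Minimal rule-based "expert system" sketch. Each rule pairs a set of
# required answers with a recommendation; the inference engine returns the
# first rule whose conditions all hold. Rules are illustrative, not real
# pricing expertise.

RULES = [
    ({"market": "new", "competition": "low"}, "price high (skim the market)"),
    ({"market": "new", "competition": "high"}, "price low (penetrate the market)"),
    ({"market": "mature"}, "match the prevailing market price"),
]

def infer(answers: dict) -> str:
    """Return the recommendation of the first rule whose conditions all match."""
    for conditions, recommendation in RULES:
        if all(answers.get(key) == value for key, value in conditions.items()):
            return recommendation
    return "no rule applies; escalate to a human expert"
```

In effect the rule table stands in for the "collective knowledge of experts", and the fallback branch reflects the point made above: unstructured decisions ultimately rest on individual expertise when no codified rule fits.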
Another approach is to monitor and
document the process that was used so that the decision maker(s) can readily
review what has already been examined and concluded. An even more novel
approach used to support these decisions is to provide environments that are
specially designed to give these decision makers an atmosphere that is
conducive to their particular tastes. The key to supporting unstructured decisions is to understand the role that an individual's experience or expertise plays in the decision and to allow for individual approaches.
c. Semi-Structured Decisions –
In the middle of the continuum are
semi-structured decisions, and this is where most of what are considered to be
true decision support systems are focused. Decisions of this type are
characterized as having some agreement on the data, process, and/or evaluation
to be used, but are also typified by efforts to retain some level of human
judgement in the decision making process. An initial step in analyzing which
support system is required is to understand where the limitations of the
decision maker may be manifested (i.e., the data acquisition portion, the
process component, or the evaluation of outcomes).
Grappling with the latter two types of
decisions—unstructured and semi-structured—can be particularly problematic for
small businesses, which often have limited technological or work force
resources. As Gupta and Harris indicated, "many decision situations faced
by executives in small business are one-of-a-kind, one-shot occurrences
requiring specifically tailored solution approaches without the benefit of any
previously available rules or procedures. This unstructured or semi-structured nature of these decision situations aggravates the problem of limited resources and staff expertise available to a small business executive to analyze important decisions appropriately. Faced with this difficulty, an executive in a small business must seek tools and techniques that do not demand too much of his time and resources and are useful to make his life easier." Consequently, small businesses have increasingly turned to DSS to provide them with assistance in business guidance and management.
2. Decision Support Systems
Decision support systems are a set of
manual or computer-based tools that assist in some decision-making activity. In
today's business environment, however, decision support systems (DSS) are
commonly understood to be computerized management information systems designed
to help business owners, executives, and managers resolve complicated business
problems and/or questions. Good decision support systems can help business
people perform a wide variety of functions, including cash flow analysis,
concept ranking, multistage forecasting, product performance improvement, and
resource allocation analysis. Previously regarded as primarily a tool for big
companies, DSS has in recent years come to be recognized as a potentially
valuable tool for small business enterprises as well.
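One of the DSS functions listed above, cash flow analysis, reduces to a projection that is easy to sketch. The figures and the single-currency, fixed-period assumptions are purely illustrative.

```python
# Toy cash flow projection, the kind of calculation a DSS automates:
# roll an opening balance forward through expected inflows and outflows
# and report the balance at the end of each period.

def project_cash(opening: float, inflows: list, outflows: list) -> list:
    """Return the running cash balance after each period."""
    balances = []
    balance = opening
    for inflow, outflow in zip(inflows, outflows):
        balance += inflow - outflow
        balances.append(round(balance, 2))
    return balances
```

A DSS wraps exactly this sort of model with data access and what-if controls, so a manager can vary the inflow assumptions and immediately see where the balance would turn negative.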
3. Transaction Processing Systems
A transaction processing system is a
type of information system. TPSs collect, store, modify, and retrieve the
transactions of an organization. A transaction is an event that generates or
modifies data that is eventually stored in an information system. For example, if an
electronic payment is made, the amount must be both withdrawn from one account
and added to the other; it cannot complete only one of those steps. Either both
must occur, or neither. In case of a failure preventing transaction completion,
the partially executed transaction must be 'rolled back' by the TPS. While this
type of integrity must be provided also for batch transaction processing, it is
particularly important for online processing: if, for example, an airline seat
reservation system is accessed by multiple operators, after an empty seat
inquiry, the seat reservation data must be locked until the reservation is
made, otherwise another user may get the impression a seat is still free while
it is actually being booked at the time. Without proper transaction monitoring,
double bookings may occur. Other transaction monitor functions include deadlock
detection and resolution (deadlocks may be inevitable in certain cases of
cross-dependence on data), and transaction logging (in 'journals') for 'forward
recovery' in case of massive failures.
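The all-or-nothing property described above can be demonstrated with SQLite, whose connections roll back automatically when an error escapes a transaction. The schema and account names are invented; this is a sketch of the principle, not a production TPS (it omits multi-operator locking, journaling, and deadlock handling).

```python
# Demonstrate TPS-style atomicity: a funds transfer either debits one
# account and credits the other, or is rolled back entirely.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 50.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Debit src and credit dst atomically; any failure rolls back both steps."""
    with conn:  # commits on success, rolls back if an exception escapes
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()
        if balance < 0:
            # abandoning the transaction here undoes the debit as well
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
```

A failed transfer leaves both balances exactly as they were before it began, which is the rollback behavior the text requires of a TPS.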
The steps followed to design an MIS system
1. Systems analysis, which includes information needs assessment, requirements analysis, and requirements specification
2. Systems design, which includes synthesis of alternatives, cost-effectiveness analysis of alternatives, specification of criteria for selecting a preferred alternative, selection of a preferred alternative, top-level design, and detailed design
3. Systems implementation, which includes forms development, specification of data collection and entry procedures, development of editing and quality control procedures, software coding and testing, development of training materials and training, integration of the software components with other system components (e.g., personnel, communications, data transfer and assembly, report preparation and distribution, feedback), and system-level testing
4. Systems operation and support, which includes not only routine operating procedures but also provision for on-going system financing and management, quality control, software maintenance and updating, personnel training, and system maintenance and improvement (including periodic review of system performance and diagnosis and correction of problems)
Business Process Reengineering (BPR)
Business Process Reengineering is
the analysis and design of workflows and processes within an organization. A
business process is a set of logically related tasks performed to achieve a
defined business outcome. Re-engineering is the basis for many recent
developments in management. The cross-functional team, for example, has become
popular because of the desire to re-engineer separate functional tasks into
complete cross-functional processes. Also, many recent management information systems developments aim to integrate a wide range of business functions, such as enterprise resource planning, supply chain management, knowledge management systems, groupware and collaborative systems, human resource management systems, and customer relationship management.
Business Process Reengineering is
also known as Business Process Redesign, Business Transformation, or Business
Process Change Management.
Business process reengineering
(BPR) began as a private sector technique to help organizations fundamentally
rethink how they do their work in order to dramatically improve customer
service, cut operational costs, and become world-class competitors. A key
stimulus for reengineering has been the continuing development and deployment
of sophisticated information systems and networks. Leading organizations are becoming
bolder in using this technology to support innovative business processes,
rather than refining current ways of doing work.
Reengineering guidance and relationship of Mission and Work Processes to Information Technology.
Business Process Reengineering (BPR) is the fundamental rethinking and radical redesign of an organization's existing processes and resources. It is more than mere business improvement.
It is an approach for redesigning
the way work is done to better support the organization's mission and reduce
costs. Reengineering starts with a high-level assessment of the organization's
mission, strategic goals, and customer needs. Basic questions are asked, such
as "Does our mission need to be redefined? Are our strategic goals aligned
with our mission? Who are our customers?" An organization may find that it
is operating on questionable assumptions, particularly in terms of the wants
and needs of its customers. Only after the organization rethinks what it should
be doing, does it go on to decide how best to do it.
Within the framework of this
basic assessment of mission and goals, reengineering focuses on the
organization's business processes—the steps and procedures that govern how
resources are used to create products and services that meet the needs of
particular customers or markets. As a structured ordering of work steps across
time and place, a business process can be decomposed into specific activities,
measured, modeled, and improved. It can also be completely redesigned or
eliminated altogether. Reengineering identifies, analyzes, and redesigns an
organization's core business processes with the aim of achieving dramatic
improvements in critical performance measures, such as cost, quality, service,
and speed.
Reengineering recognizes that an
organization's business processes are usually fragmented into sub processes and
tasks that are carried out by several specialized functional areas within the
organization. Often, no one is responsible for the overall performance of the
entire process. Reengineering maintains that optimizing the performance of sub processes
can result in some benefits, but cannot yield dramatic improvements if the
process itself is fundamentally inefficient and outmoded. For that reason,
reengineering focuses on redesigning the process as a whole in order to achieve
the greatest possible benefits to the organization and its customers. This
drive for realizing dramatic improvements by fundamentally rethinking how the
organization's work should be done distinguishes reengineering from process
improvement efforts that focus on functional or incremental improvement.
Role of IT for BPR
By some estimates, over seventy
percent of today's companies are performing business process reengineering
(BPR). BPR realigns business processes along more strategic lines by examining
current processes and redesigning those processes to increase efficiency and
effectiveness. As more organizations launch BPR projects, one issue becomes painfully clear. Radically altering business processes within highly
automated work environments typically requires modification to the information
systems that support those processes.
Information technology (IT)
organizations have had significant difficulty meeting the BPR challenge due to
the inherent complexities involved in "retooling" complex legacy
environments. In order to more effectively respond to BPR retooling demands, IT
must play a more active role throughout a BPR project. IT must:
1. Increase its level of participation in all areas of a BPR initiative;
2. Provide key information regarding automated processes to business analysts;
3. Build a transition strategy that meets short- and long-term retooling requirements;
4. Enforce the integrity of redesigned business processes in the target system;
5. Reuse business rules and related components that remain constant in a target application.
Factors driving a BPR project can
include improving customer service, streamlining processes to cut costs, or
addressing inefficiencies in other high impact areas. For example, customers
frustrated with having to speak to multiple individuals regarding an insurance
claim may switch to the competition. To address this problem, an insurance
provider determines that service functions must be consolidated to one point of
contact. The underlying systems that manage claims handling do not support
single point of contact processing. In this case, legacy systems have become a
barrier to the success of the BPR initiative.
Regardless of the motivating
factors, creating an implementable retooling plan to support a BPR project
remains a frustrating, yet essential, challenge to IT organizations. Retooling
strategies can include surround technology, off-shore rewrites, redevelopment
of impacted systems, webification and package acquisition. Surround strategies,
like the creation of graphical user front-ends or a data warehouse, provide
some near-term benefit, but tend to ignore fundamental problems with stovepipe
information architectures.
Off-shore projects, which involve
shipping a system overseas, typically focus on a technical rewrite of a system.
Redevelopment of impacted applications normally couples redesign of the
technical architecture with redefinition of system boundaries and addition of
new functionality. Linking business functions to the internet still requires
managing and synchronizing legacy data and applications. As for software
packages, organizations must invest adequate time to determine if a package
meets BPR requirements and the amount of customization required if it does not.
The applicability of each of these approaches must be analyzed on a
case-by-case basis.
To determine retooling
strategies, a relationship between IT and the business must be formalized early
on. This relationship, which supports BPR analysis and implementation, is
reciprocal because business and technical analysts must devise a continuous
feedback communication loop for projects to work. This is particularly critical
because current systems analysis helps articulate the as-is business model while
the redesigned business model dictates the impact BPR has on existing
information architectures. Once this reciprocal cycle is in place, IT can
determine exactly how to upgrade, redesign, or replace selected systems in
order to implement reengineered business processes. Figure one highlights key
steps in a retooling strategy.
BPR Retooling Steps
1. Define strategy, select modeling methodology & establish BPR project plan
2. Build as-is business model for impacted business areas based on strategic vision
3. Refine / expand as-is model for all processes supported by existing systems
4. Integrate as-is process definitions with current system functions & components
5. Finalize functional and technical architecture required to meet BPR objectives
6. Select retooling strategy to redevelop, surround, acquire, webify or off-load applications
7. Maintain design integrity between business requirements and retooled applications
8. Reuse applicable components including rules, interfaces & data in target application
9. Validate retooled application against initial simulation of redesigned processes
IT Plays Key Role in Assessing Changing Business Requirements
Being able to determine
appropriate retooling strategies requires that individuals with knowledge of
the information architecture be on the BPR team. Early and ongoing involvement
of IT personnel benefits all aspects of the project. Initial IT involvement
focuses on discovery. If a process is to be fully understood, and portions of
that process are automated, analysis of underlying applications facilitates the
completion of the as-is business process model.
Existing Architecture Must Be Mapped to Business Model
One major IT requirement involves
mapping the existing information architecture to the business model to
accurately depict current processes. Historically, companies could separate
business operations and underlying technology. This distinction is no longer
possible. Both areas must be analyzed in an integrated fashion to understand
current business processes. This involves analyzing the systems, as well as
human interaction with those systems. This includes reviewing system functions,
user interfaces, operational procedures, and the often convoluted process of
data sharing.
The method used by IT analysts to
refine the as-is process model requires documenting current functions and
presenting the information in a way that can be integrated into the business
model. For example, if order processing is being reengineered, analysts must
review system functions that initiate, register, process, deliver, bill, and
collect payment for an order. In many legacy architectures, these functions are
scattered across numerous standalone or stovepipe systems.
IT-based analysis involves
building an inventory of all impacted systems and includes system name,
location, operating environment, owner, interfaces, and programs. This
inventory should be established at a subsystem level where applicable. The
information collected here can be tracked using spreadsheets or in an open
repository using a standard transition model. While a formal repository
representation of a transitional model is more robust, the spreadsheet is a
good communication vehicle for users.
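The system inventory described above maps naturally onto one record per impacted system; a spreadsheet row and the record below carry the same fields. All system names and details here are invented examples, not any real architecture.

```python
# Sketch of a BPR system inventory: one record per impacted system, with
# the fields the text lists (name, location, operating environment, owner,
# interfaces, programs). Entries are illustrative.
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    name: str
    location: str
    environment: str
    owner: str
    interfaces: list = field(default_factory=list)
    programs: list = field(default_factory=list)

inventory = [
    SystemRecord("ORD-ENTRY", "Dallas", "MVS/COBOL", "Sales IT",
                 interfaces=["order entry screen"], programs=["ORD001"]),
    SystemRecord("BILLING", "Dallas", "AS/400", "Finance IT",
                 interfaces=["invoice report"], programs=["BIL010", "BIL020"]),
]

def systems_owned_by(owner: str) -> list:
    """Simple query of the kind a spreadsheet filter or repository supports."""
    return [s.name for s in inventory if s.owner == owner]
```

Kept in a repository rather than a spreadsheet, the same records gain the richer linking described next, but the flat form remains the easier one to share with users.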
The transition model can be
established using standard repository technology. Additionally some products
include a database that allows analysts to store information about a system. In
some of these products, meta-data can be depicted in an object model that can
then be used to model business processes. This integrated view, coupled with
the ability to layer objects, allows business analysts to view summary
information and IT analysts to view the physical detail.
Building a base of information
that defines an existing information architecture requires working with systems
analysts to identify which pieces of which systems support key business
functions. A function is "a group of business activities which together
completely support one aspect of furthering the mission of the
enterprise". Research can be done through interviews with system subject
matter experts or through interrogation of interfaces, including screen and
report headings. This process is called "reverse requirements
tracing" and is most effective when results are verified with subject
matter experts.
Legacy functions are mapped into
the transition model as they are identified. The research process focuses on
those systems that contain functions that support the processes defined in the
business model. Each function, as it is discovered, is linked to the business
process it supports. That function is then linked to the user interfaces that
implement or initiate it. Interfaces include on-line screens, batch reports,
and job control streams. User interfaces can be linked to the programs that
implement that function during the implementation phase - depending on the
retooling strategy.
Analysts continue to link
business processes to functions, and functions to physical components, until
all automated processes in the business model are mapped. The reason a business
process is linked to an extracted function as an interim step in the transition
model is due to legacy system limitations. Functions, normally scattered across
stovepipe systems, are extracted from complex legacy interfaces. Creating an
interim mapping is therefore easier than trying to relate legacy components
directly to process-driven business models.
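The chain of links the text describes, from business process to extracted legacy function to the user interfaces that implement it, can be sketched as two small mappings. Every process, function, and interface name below is invented for the order-processing example.

```python
# Sketch of the interim transition model: processes link to legacy
# functions, and functions link to the interfaces (screens, reports,
# batch jobs) that implement them.

process_to_functions = {
    "process order": ["register_order", "validate_credit"],
    "bill customer": ["generate_invoice"],
}

function_to_interfaces = {
    "register_order": ["ORD100 online screen"],
    "validate_credit": ["CRD050 batch report"],
    "generate_invoice": ["BIL200 online screen", "BIL210 batch job"],
}

def interfaces_for(process: str) -> list:
    """Traverse process -> functions -> interfaces through the model."""
    return [ui
            for fn in process_to_functions.get(process, [])
            for ui in function_to_interfaces.get(fn, [])]
```

The indirection through functions is the point made in the text: because legacy functions are scattered across stovepipe systems, mapping processes to functions first, and functions to physical components second, is easier than relating legacy code directly to the business model.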
Retooling Strategy Must Consider Legacy Architecture Evolution
In any BPR project, the as-is
model should be used as a basis to redesign workflows. As processes are eliminated,
updated, added, and re-sequenced, links to legacy system functions are
maintained in the transition model. The modeling approach used can be flexible,
because the open transition model supports mapping of legacy functions to
object, user interface, or other types of models. Upon completion of the new
business model, work can begin on the retooling strategy. This requires an
assessment of the existing architecture and target requirements which results
in a detailed, retooling implementation plan.
The first step in the assessment
requires management to identify feasible retooling hypotheses. This includes
surround strategies, redevelopment, offshore options, internet options, and/or
package acquisition. Several basic issues drive which hypotheses are
considered. An immediate requirement to address customer service consolidation
may necessitate surround technology, such as graphical front-ends or data
warehouse. An offshore rewrite could also serve as an interim strategy to
stabilize a weak system. Additionally, selected off-the-shelf packages may meet
retooling requirements. Finally, redeveloping impacted systems may be the ideal
long-term solution to meet BPR requirements.
Regardless of the approach, the
transition model can be used to support detailed analysis, design, and
implementation. The analysis needed to finalize the retooling plan includes
maintaining links between business models and detailed design models, further
decomposition of legacy functions, and mapping legacy functions to detailed
target models. The type and depth of analysis depends on the strategy being
examined. For example, requirements mapping to a package compares package
models with target and legacy design models to determine reuse, deactivation,
integration, and migration requirements.
Full-scale redevelopment,
depending on the target architecture, requires detailed extraction and reuse of
existing business rules. Extracted business rules must be mapped, at the
applicable level of detail, to target design models. Augmentation of target
models, and reuse of key business rules, can significantly streamline
redevelopment cycles and shorten implementation windows. Many of the existing
business rules can be reused under a target architecture. Organizations are
spending a lot of money trying to recreate business rules when they don’t have
to.
These concepts challenge
traditional BPR retooling strategies which include surrounding impacted systems
or undertaking a complete rewrite. The first approach ignores the fact that
underlying architectures remain segregated and convoluted. The second approach
rarely accommodates legacy migration requirements and tends to exceed delivery
windows and budget plans. Selecting a common sense approach, based on detailed
analysis of target and existing architectures, yields the best results.
Several redevelopment options can
be applied as a way to retool legacy architectures. Regardless of the approach,
systems should be phased in over time, while deactivating legacy components
along the way. One approach is to create a system to support the cross section
of the business being reengineered. In the order processing example, this means
mapping all business rules extracted from multiple standalone systems to a
target design model. Rule extraction is accomplished using a combination of
slicing and cross-system extraction tools.
Another approach, applying
similar techniques, performs multi-system integration on all relevant systems.
This approach retools the existing architecture on a broad scale. The focus is
on mapping legacy to target design models, rule reuse, redundant rule
consolidation, legacy deactivation, and transition management. Both approaches
require phased decomposition of existing systems into reusable business logic,
data access, and communication segments. Data store consolidation and redesign
is performed concurrently. The transition model plays an active and critical
role throughout the implementation process.
Internet utilization requires an assessment of the role of legacy data and functionality. Leveraging the Internet requires more than turning loose scores of web coders. Redevelopment
offers this broader view of the issue and the solution. Even package strategies
require a retooling component. Legacy systems must be inventoried, deactivated,
and integrated as part of package implementation. If the package requires
retooling, redevelopment techniques may be applied after implementation. Any of
these longer term approaches can be coupled with a parallel, interim surround
strategy to deliver value near-term.
While it is true that BPR
retooling efforts, a relatively new endeavor for IT, have stumbled in the past,
this no longer needs to be the case. Following a few basic guidelines, along
with education on the tools and techniques that support the process, allows
managers to evaluate more options and make more informed retooling decisions in
the long run. As IT moves up the retooling maturity curve, realistic interim
and long-term retooling options should become the norm.
Enterprise Resource Planning (ERP)
Enterprise resource planning
(ERP) integrates internal and external management information across an entire
organization, embracing finance/accounting, manufacturing, sales and service,
etc. ERP systems automate this activity with an integrated software
application. Their purpose is to facilitate the flow of information between all business functions inside the boundaries of the organization and to manage connections to outside stakeholders.
ERP systems can run on a variety
of hardware and network configurations, typically employing a database to store
data.
ERP systems typically include the
following characteristics:
1. An integrated system that operates in (or near) real time, without relying on periodic updates.
2. A common database that supports all applications.
3. A consistent look and feel throughout each module.
4. Installation of the system without elaborate application/data integration by the Information Technology (IT) department.
Importance
of ERP
There can be no doubt that ERP is
an important tool in our world of today. As more businesses begin to compete on
a global scale, it will become critical for them to streamline their operations
and processes. However, it is important to realize that ERP is not the cure to
all the problems a company will face. There are a number of pros and cons to
this technology, and those who understand this will be the most likely to
succeed.
One of the most powerful benefits of ERP is that it successfully combines the many system architectures of a company. Indeed, this is why the technology was originally introduced.
When business processes are
streamlined into a single cohesive unit, the company will operate at a higher
level. This will lead to a higher level of productivity, and this in turn will
lead to more profits. Another powerful advantage of ERP is greater levels of
information flow, along with a higher quality of information. Given the fact
that we are living in the Information Age, this is critically important.
Companies must be able to rapidly transfer information from one place to
another. When information is transferred quickly and efficiently, the company
or organization will be able to act on the data within a short period of time.
However, it is not simply enough
to transfer information quickly. The organization must be able to make sure the
data is high in quality. All of the information in the world is useless if it
is not high in quality. In addition to information flow and data quality, ERP
is also powerful because it allows a company to effectively manage its
inventory. When the products are manufactured, it will be done with a high
level of precision. Perhaps the most important thing about this technology is
that the costs will be decreased. When a company has to deal with large amounts
of paperwork, managing it can be costly. It is also expensive to integrate
various software tools that were not originally designed for each other.
Once the processes of a company
are integrated, the costs involved with maintenance and transfer of information
will be low. The money saved by the organization can be used to invest in new
products or marketing strategies. Enterprise Resource Planning is powerful
because it allows a company to become highly flexible. An organization that
uses this technology will be able to quickly adapt to changes that occur in the
market. Though it may require a great deal of corporate restructuring, the
benefits will pay off handsomely in the end. Flexibility is very important
today. If an organization is not flexible, it will be difficult for them to
stay competitive.
One of the most powerful advantages of ERP is that it promotes disciplined software implementation. Even though Y2K didn't become the disaster that many people expected, it drove home the importance of making sure software is properly implemented. In addition to dealing with
software issues, ERP can also help companies integrate their operations. At the
same time, it is important to realize that there are a number of challenges
involved with utilizing ERP. Perhaps one of the greatest of these challenges is
cost. Enterprise Resource Planning tools are outside the price range of many
organizations.
When ERP was first introduced,
the only companies that truly could afford it were Fortune 1000 companies. Even
then, there was the problem of getting workers to accept the new tool. A number
of companies would purchase complex ERP tools, only to find that it was not
successful because the end users failed to properly use it.
Another problem with ERP is the
implementation. Setting up this system can be complex and time consuming, and
the minimum implementation time for a large company is six months. Despite
this, there have been cases where it took 18 months to fully implement the
system. Some clients have also complained that ERP software is not flexible.
It is important to understand
that ERP tools must be customized to meet the needs of the company. In most cases, they will not be useful when first purchased. Each company has unique
needs, and ERP tools must be able to meet them. A number of companies run into
problems when they attempt to customize the software.
Enterprise
Management Systems
Enterprise Management Systems (EM
Systems) are Network Management Systems (NMSs) capable of managing devices,
independent of vendors and protocols, in Internet Protocol (IP)-based
enterprise networks. Element Management Systems (EMSs), on the other hand, are
NMSs that are designed to manage a particular device, often implemented by the
device manufacturer.
Hardware
Hardware
Acquisition
Your IT strategy should define
the generic type of hardware that your business needs, and a timetable for its
acquisition. Consider upgrade costs, warranties, what level of support you want
and the maintenance contract with your supplier. These will affect pricing and
ongoing costs.
You can acquire hardware in a
number of different ways - for example by buying, leasing or renting it. Many
small businesses buy their desktop and server hardware outright. Use our
interactive tool to find out which computer equipment you should buy for your
business.
You can buy hardware directly
from the original manufacturer, usually via their website or over the
telephone, or through a retail channel. The direct route can be cost-effective
for a small number of PCs at a time.
To get the best from this route
you must have a clearly defined specification for your needs. Choose a standard offering that is close to, but better equipped than, your minimum requirements.
You can upgrade memory, for example, later on if you need to.
Another option is to hire a
consultant to help you refine your requirements, place your 'shopping list'
with several different suppliers and see what sort of deals you are offered.
You can also buy printers
directly from the manufacturer, but consider the likely running costs before
choosing a particular type.
Ink-jet printers are often priced
cheaply, but the ink cartridges are expensive and may have to be replaced
often. It may be better to purchase a more expensive laser-based printer and
share it between the staff using your network. The lower running costs will
quickly cover the additional capital cost if you use color a lot.
As well as PCs, servers and
printers, you will need network infrastructure - such as network switches and
all the wiring needed to connect equipment. You can obtain this type of
hardware from most of the major catalogue-based IT vendors, who carry a large range of network equipment.
It is important to balance how
much you spend on infrastructure and how much on PCs. Make sure to include this
in your IT strategy and plan any network upgrades so that they are implemented
at the right time, not in a piecemeal fashion.
Hardware
Selection Criteria
1. Hardware must support current software as well as software planned for procurement over the next planning interval [year, 18 months, three years].
2. Hardware must be compatible with existing or planned networks.
3. Hardware must be upgradeable and expandable to meet the needs of the next planning interval.
4. Hardware warranties must be of an appropriate length.
5. Hardware maintenance must be performed by [local/remote vendor, in-house personnel].
6. Whenever feasible, hardware standards will dictate procurement of like brands and configurations to simplify installation and support.
7. Routine assessments of installed infrastructure will feed an upgrade/replace decision process.
Programming
Computer programming (often
shortened to programming or coding) is the process of designing, writing,
testing, debugging / troubleshooting, and maintaining the source code of
computer programs. This source code is written in a programming language. The
purpose of programming is to create a program that exhibits a certain desired
behavior. The process of writing source code often requires expertise in many
different subjects, including knowledge of the application domain, specialized
algorithms and formal logic.
Types
of Programming Languages
1. Procedural Programming Languages
Procedural programming specifies a list of operations that the program must complete to reach the desired state. This is one of the simpler programming paradigms, where a program is represented much like a cookbook recipe. Each program has a starting state, a list of operations to complete, and an ending point. This approach is also known as imperative programming. Integral to the idea of procedural programming is the concept of a procedure call.
Procedures,
also known as functions, subroutines, or methods, are small sections of code
that perform a particular function. A procedure is effectively a list of computations
to be carried out. Procedural programming can be compared to unstructured
programming, where all of the code resides in a single large block. By
splitting the programmatic tasks into small pieces, procedural programming
allows a section of code to be re-used in the program without making multiple
copies. It also makes it easier for programmers to understand and maintain
program structure.
Two
of the most popular procedural programming languages are FORTRAN and BASIC.
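The recipe-like structure described above can be sketched in Python (the function names and data here are purely illustrative):

```python
# Procedural style: a program as a sequence of steps, factored into
# small reusable procedures (functions).

def to_celsius(fahrenheit):
    """Convert a Fahrenheit reading to Celsius."""
    return (fahrenheit - 32) * 5 / 9

def average(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

def main():
    readings_f = [68.0, 77.0, 86.0]                   # starting state
    readings_c = [to_celsius(f) for f in readings_f]  # list of operations
    print(round(average(readings_c), 1))              # ending point: 25.0

main()
```

Note how each procedure is written once and can be reused anywhere in the program, rather than the conversion arithmetic being copied at every call site.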
2. Structured Programming Languages
Structured
programming is a special type of procedural programming. It provides additional
tools to manage the problems that larger programs were creating. Structured
programming requires that programmers break program structure into small pieces
of code that are easily understood. It also frowns upon the use of global
variables and instead uses variables local to each subroutine. One of the well
known features of structured programming is that it does not allow the use of
the GOTO statement. It is often associated with a "top-down" approach
to design. The top-down approach begins with an initial overview of the system
that contains minimal details about the different parts. Subsequent design
iterations then add increasing detail to the components until the design is
complete.
The
most popular structured programming languages include C, Ada, and Pascal.
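The top-down approach can be illustrated with a short Python sketch (a hypothetical reporting task; all names are made up for the example): the top-level function reads like the initial overview, and detail lives in small subroutines that use only local variables.

```python
# Top-down, structured design: no globals, no GOTO, small understandable pieces.

def report(scores):
    """Overview level: validate the data, summarize it, format the result."""
    cleaned = validate(scores)
    stats = summarize(cleaned)
    return format_stats(stats)

def validate(scores):
    """Detail level: keep only scores in the valid 0-100 range."""
    return [s for s in scores if 0 <= s <= 100]

def summarize(scores):
    """Detail level: compute simple statistics with local variables only."""
    return {"count": len(scores), "mean": sum(scores) / len(scores)}

def format_stats(stats):
    """Detail level: render the statistics as a line of text."""
    return f"{stats['count']} scores, mean {stats['mean']:.1f}"

print(report([90, 80, 70, 150]))  # the out-of-range 150 is dropped
```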
3. Object-Oriented Programming Languages
Object-oriented programming is one of the newest and most powerful paradigms. In object-oriented
programs, the designer specifies both the data structures and the types of
operations that can be applied to those data structures. This pairing of a
piece of data with the operations that can be performed on it is known as an
object. A program thus becomes a collection of cooperating objects, rather than
a list of instructions. Objects can store state information and interact with
other objects, but generally each object has a distinct, limited role.
There
are several key concepts in object-oriented programming (OOP). A class is a
template or prototype from which objects are created, so it describes a
collection of variables and methods (which is what functions are called in
OOP). These methods can be accessible to all other classes (public methods) or
can have restricted access (private methods). New classes can be derived from a
parent class. These derived classes inherit the attributes and behavior of the
parent (inheritance), but they can also be extended with new data structures
and methods.
The
list of available methods of an object represents all the possible interactions
it can have with external objects, which means that it is a concise
specification of what the object does. This makes OOP a flexible system,
because an object can be modified or extended with no changes to its external
interface. New classes can be added to a system that uses the interfaces of the
existing classes. Objects typically communicate with each other by message
passing. A message can send data to an object or request that it invoke a
method. The objects can both send and receive messages.
Another
key characteristic of OOP is encapsulation, which refers to how the
implementation details of a particular class are hidden from all objects
outside of the class. Programmers specify what information in an object can be
shared with other objects.
A
final attribute of object oriented programming languages is polymorphism.
Polymorphism means that objects of different types can receive the same message
and respond in different ways. The different objects need to have only the same
interface (that is, method definition). The calling object (the client) does
not need to know exactly what type of object it is calling, only that it has a
method of a specific name with defined arguments. Polymorphism is often applied
to derived classes, which replace the methods of the parent class with
different behaviors. Polymorphism and inheritance together make OOP flexible
and easy to extend.
Object-oriented
programming proponents claim several large advantages. They maintain that OOP
emphasizes modular code that is simple to develop and maintain. OOP is popular
in larger software projects, because objects or groups of objects can be
divided among teams and developed in parallel. It encourages careful up-front
design, which facilitates a disciplined development process. Object-oriented
programming seems to provide a more manageable foundation for larger software
projects.
The
most popular object-oriented programming languages include Java, Visual Basic,
C#, C++, and Python.
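The OOP concepts above — classes, inheritance, encapsulation, and polymorphism — can be sketched in a few lines of Python (the shapes are an illustrative example, not taken from the text):

```python
class Shape:
    """Parent class: a template from which shape objects are created."""
    def __init__(self, name):
        self._name = name          # encapsulated: outsiders use methods, not fields

    def area(self):
        raise NotImplementedError  # each derived class supplies its own behavior

    def describe(self):            # public method inherited by all subclasses
        return f"{self._name}: area {self.area():.2f}"

class Circle(Shape):               # derived class (inheritance)
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):                # polymorphism: same message, circle-specific reply
        return 3.14159 * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):
        return self._side ** 2

# The client sends the same describe() message to objects of different types.
for shape in [Circle(1), Square(2)]:
    print(shape.describe())
```

The loop at the bottom is polymorphism in action: the caller never checks which kind of shape it holds, it only relies on the shared interface.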
4. Other Paradigms
Concurrent
programming provides for multiple computations running at once. This often
means support for multiple threads of program execution. For example, one
thread might monitor the mouse and keyboard for user input, while another
thread performs database accesses. Popular concurrent languages include Ada and
Java.
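A minimal sketch of the two-thread pattern just described, using Python's standard threading module (a queue of fake "keystrokes" stands in for real user input and database work):

```python
import threading
import queue

# Two threads run at once: a producer simulates user-input events,
# a consumer processes them (a stand-in for the database accesses).

events = queue.Queue()

def produce():
    for key in "abc":
        events.put(key)
    events.put(None)               # sentinel: no more input

def consume(out):
    while True:
        key = events.get()
        if key is None:
            break
        out.append(key.upper())    # "process" the event

results = []
t1 = threading.Thread(target=produce)
t2 = threading.Thread(target=consume, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                     # ['A', 'B', 'C']
```

The queue does the coordination: the consumer safely waits for work while the producer runs, which is the essence of the concurrent style.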
Functional
programming languages define subroutines and programs as mathematical
functions. These languages are most frequently used in various types of
mathematical problem solving. Popular functional languages include LISP and
Mathematica.
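Functional style can be approximated in Python: the sketch below builds a sum of squares entirely from pure functions (no mutable state), using map and reduce.

```python
from functools import reduce

# Functional style: subroutines defined as mathematical functions.
square = lambda x: x * x
add = lambda a, b: a + b

def sum_of_squares(numbers):
    # reduce folds the mapped values together: 1 + 4 + 9 + 16
    return reduce(add, map(square, numbers), 0)

print(sum_of_squares([1, 2, 3, 4]))  # 30
```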
Event-driven
programming is a paradigm where the program flow is determined by user actions.
This is in contrast to batch programming, where flow is determined when the
program is written. Event-driven programming is used for interactive programs,
such as word processors and spreadsheets. The program usually has an event
loop, which repeatedly checks for interesting events, such as a key press or
mouse movement. Events cause the execution of trigger functions, which process
the events.
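The event loop and trigger functions described above can be sketched as follows (a toy dispatcher; the event names and handlers are invented for illustration):

```python
# A minimal event-driven program: handlers (trigger functions) are
# registered per event type, and flow is driven by incoming events.

handlers = {}

def on(event_type, handler):
    """Register a trigger function for an event type."""
    handlers[event_type] = handler

log = []
on("key_press", lambda data: log.append(f"typed {data}"))
on("mouse_move", lambda data: log.append(f"moved to {data}"))

def event_loop(events):
    for event_type, data in events:     # repeatedly check for events
        if event_type in handlers:
            handlers[event_type](data)  # dispatch to the trigger function

event_loop([("key_press", "a"), ("mouse_move", (3, 4))])
print(log)
```

In a real word processor the loop would block waiting on the windowing system instead of iterating over a prepared list, but the dispatch structure is the same.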
Two
final paradigms to discuss are compiled languages and interpreted languages.
Compiled languages use a program called a compiler to translate source code
into machine instructions, usually saved to a separate executable file.
Interpreted languages, in contrast, can be executed directly from source code by a special program called an interpreter. This
distinction refers to the implementation, rather than the language design
itself. Most software is compiled, and in theory, any language can be compiled.
LISP is a well-known interpreted language. Some popular implementations, such as
Java and C#, use just-in-time compilation. Source programs are compiled into
bytecode, which is an executable for an abstract virtual machine. At run time,
the bytecode is compiled into native machine code.
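Python itself makes the source-to-bytecode step easy to observe: its standard `compile` builtin produces a code object for Python's abstract virtual machine, and the `dis` module disassembles it.

```python
import dis

# Compile a tiny source program to bytecode for the Python virtual machine.
code = compile("result = 2 + 3", "<string>", "exec")
dis.dis(code)                # prints the bytecode instructions

namespace = {}
exec(code, namespace)        # the VM executes the bytecode
print(namespace["result"])   # 5
```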
System
Software
System software is computer software designed to operate the computer hardware and to provide a platform for running application software.
System software supports the use of the operating system and the computer system. It includes diagnostic tools, compilers, servers, windowing systems, utilities, language translators, data communication
programs, database systems and more. The purpose of system software is to
insulate the applications programmer as much as possible from the complexity
and specific details of the particular computer being used, especially memory
and other hardware features, and such accessory devices as communications,
printers, readers, displays, keyboards, etc.
Application
Software
Application software, also known
as an application or an "app", is computer software designed to help
the user to perform singular or multiple related specific tasks. It helps to
solve problems in the real world. Examples include enterprise software,
accounting software, office suites, graphics software, and media players.
Application software is
contrasted with system software and middleware, which manage and integrate a
computer's capabilities, but typically do not directly apply them in the
performance of tasks that benefit the user. A simple, if imperfect, analogy in
the world of hardware would be the relationship of an electric light bulb (an
application) to an electric power generation plant (a system). The power plant
merely generates electricity, not itself of any real use until harnessed to an
application like the electric light that performs a service that benefits the
user.
Processing
of information
When we deal with information, we
do so in steps. One way to think of this is to picture the process of
acquiring, retaining, and using information as an activity called information
processing. Information comes from the outside world into the sensory registers
in the human brain. This input consists of things perceived by our senses. We
are not consciously aware of most of the things we perceive; we become aware of
them only if we consciously direct our attention to them. When we do focus our
attention on them, they are placed in our working memory.
Another name for our working
memory is short-term memory. Our working memory has a very limited capacity -
we can attend to only about seven items at a time. Therefore, we must take one
of the following actions with regard to each piece of information that comes
into this short-term storage area: (1) continuously rehearse it, so that it
stays there; (2) move it out of this area by shifting it to long-term memory;
or (3) move it out of this area by forgetting it.
Long-term memory, as its name
implies, stores information for a long time. The advantage of long-term memory
is that we do not have to constantly rehearse information to keep it in storage
there. In addition, there is no restrictive limit on the amount of information
we can store in long-term memory. If we move information to long-term memory,
it stays there for a long time - perhaps permanently! To make use of this
information in long term memory, we must move it back to our working memory, using
a process called retrieval.
It may be convenient to view
information processing as parallel to the way in which an executive manages a
business. Information comes into the business across the executive's desk -
mail, phone calls, personal interactions, problems, etc. (This is like
short-term memory.) Some of this information goes into the waste basket (like
being forgotten), and some of it is filed (like being stored in long-term
memory). In some cases, when new information arrives, the executive gets old
information from a file and integrates the new information with the old before
refiling it. (This is like retrieving information from long-term memory to
integrate it with new information then storing the new information in long-term
memory.) On other occasions the executive may dig out the information in several
old files and update the files in some fashion or integrate them in some way to
attack a complex problem. The business of human learning operates in much the
same manner.
Database
Management System (DBMS)
A Database Management System
(DBMS) is a set of computer programs that controls the creation, maintenance,
and the use of a database. It allows organizations to place control of database
development in the hands of database administrators (DBAs) and other
specialists. A DBMS is a system software package that supports the use of an integrated collection of data records and files known as databases. It allows
different user application programs to easily access the same database. DBMSs
may use any of a variety of database models, such as the network model or
relational model. In large systems, a DBMS allows users and other software to
store and retrieve data in a structured way. Instead of having to write
computer programs to extract information, users can ask simple questions in a
query language. It helps to specify the logical organization for a database and
access and use the information within a database. It provides facilities for
controlling data access, enforcing data integrity, managing concurrency, and
restoring the database from backups. A DBMS also provides the ability to
logically present database information to users.
Components
of DBMS
1. DBMS Engine accepts logical requests from various other DBMS subsystems, converts them into physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device.
2. Data Definition Subsystem helps the user create and maintain the data dictionary and define the structure of the files in a database.
3. Data Manipulation Subsystem helps the user add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between the user and the information contained in a database. It allows the user to specify their logical information requirements.
4. Application Generation Subsystem contains facilities to help users develop transaction-intensive applications. It usually requires that the user perform a detailed series of tasks to process a transaction. It facilitates easy-to-use data entry screens, programming languages, and interfaces.
5. Data Administration Subsystem helps users manage the overall database environment, including query optimization, concurrency control, and change management.
Relational
Database Management System (RDBMS)
A relational database management
system (RDBMS) is a database management system (DBMS) that is based on the
relational model as introduced by E. F. Codd. Most popular commercial and open
source databases currently in use are based on the relational database model.
A short definition of an RDBMS
may be a DBMS in which data is stored in the form of tables and the
relationship among the data is also stored in the form of tables.
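The point that both the data and the relationships live in tables can be demonstrated with Python's built-in `sqlite3` module (the customer/order schema is an invented example):

```python
import sqlite3

# In the relational model, relationships are themselves stored in tables:
# each row of orders relates to a customers row via the customer_id column.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
             "customer_id INTEGER REFERENCES customers(id), total REAL)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (10, 1, 250.0)")

# A simple question in a query language, instead of a custom extraction program:
row = conn.execute(
    "SELECT c.name, o.total FROM orders o "
    "JOIN customers c ON o.customer_id = c.id").fetchone()
print(row)        # ('Acme', 250.0)
```

The JOIN follows the stored relationship between the two tables, which is exactly what the short definition above describes.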