
Pacific EMIS Developer Guide

Introduction

The Pacific Education Management Information System (Pacific EMIS) is an open source application developed to improve the education systems of Pacific nations. It is currently developed and supported by the Secretariat of the Pacific Community (SPC) with support from the Asian Development Bank (ADB). This guide is for any developer contributing to this project in one way or another. It documents the high level architectural design, the development environment, the development workflow, and policies and procedures, without going into lower level details of the source code and APIs, which are currently documented directly within the source code itself.

Project Overview

The Pacific EMIS is a single page web application built with AngularJS and ASP.NET technologies. It is important to know that in some countries this web application is complementary to a desktop application, built on the same database, that was first started in 2002. There is a lot of functionality in the desktop application that has not been ported to the web version; so far we have concentrated on the features of most use to a broader user base. For example, lookup table maintenance is done (when needed) through the desktop application because there has not been much to gain by porting it. This developer guide is mostly geared towards the web application (which, again, makes use of the same database as the legacy desktop application). Moving forward, the web application is likely to eventually obsolete the legacy MS Access desktop application completely.

Project Structure

The Pineapples solution (in Visual Studio parlance) is managed in a single git repository. The solution is organised into three main projects (which also translate into three folders in the root of the repository):

  • Pineapples ASP.NET project: This is the startup executable. It is responsible for delivering all objects to the browser, including the initial page, all required scripts, stylesheets, and other assets. Pineapples implements both Web API 2 and MVC interfaces. The Web API 2 interface delivers data in JSON format to the client on request. The MVC interface delivers HTML to the client, including the initial page and the views inserted into that page.
  • Pineapples.Data .NET class library: This project exposes “business objects” to Pineapples, and is responsible for reading and writing data to the SQL Server database. It also includes any JSON serializers required by Web API 2 to construct and tear down objects received/sent over the REST interface.
  • Pineapples.Client .NET web site: The client front-end (i.e. the JavaScript and CSS files executed in the browser). As a web site, this project has no project file, and the Visual Studio build is turned off. The project is managed using a “web stack” composed of npm, bower and grunt.

Other Projects

The following projects and folders complete the solution:

  • Pineapples.Reports: This is an SSRS project, home to report definitions and related SSRS objects, which are deployed to the SSRS server. Only a couple of incomplete samples are included here; JasperReports is the reporting platform used instead.
  • Pineapples.Data.Tests: A VS2015 test project allowing tests of the data layer.
  • JaspersoftWorkspace: Where all the JasperReports definitions are stored.

Other folders found in the root of the git repository are:

  • assemblies: This is home to any additional assemblies that are not part of the solution projects, not part of .NET, and not served by NuGet. Placing these in a common location relative to the solution projects makes managing references easier. The assemblies folder sits beside the packages folder created by NuGet.
  • database: This folder holds a SQLDelta schema file representing the latest schema of the Pineapples database. SQL Delta can compare this snapshot against a database and generate a script to resolve the differences. This was never really used and will be deprecated in favor of the new approach described under SqlScript below.
  • packages: This is where NuGet installs all the backend dependencies (i.e. the assemblies and references in VS).
  • SqlScript: SQL schema evolution will now be committed here in the form of plain old SQL scripts and managed by Git.

Programming Language

The programming language on the backend (i.e. Pineapples, Pineapples.Data, Pineapples.Data.Tests) is C# using the ASP.NET 4.6 framework. The frontend (i.e. Pineapples.Client) is made up of HTML, CSS, JavaScript and TypeScript with the Angular 1 framework. All JavaScript code is slowly being migrated to TypeScript as changes are made to it or new code is written.

Why not ASP.NET 5 and Angular 2? The short answer: those two projects were not in stable release when this project started. Moving to either of them is not an upgrade but potentially a significant rewrite. New code is written in a way that eases the transition down the line. This will be reviewed in mid 2017.

Things to consider regarding the ASP.NET 4.6 to ASP.NET 5 migration path:

Issue | Pineapples (ASP.NET 4.6) | ASP.NET 5
Separation of client-side SPA from server logic | Separated into 3 projects; the Pineapples.Client web site holds all the client material and npm/bower dependencies, managed entirely by grunt | Adds wwwroot and dependencies folders into the project for this purpose
Data access | Microsoft's EF6 is the object-relational mapping framework used |
Dependency injection | The Unity framework has an adapter for Web API; this is used in Pineapples | Dependency injection is built in to ASP.NET 5
Confusion between Web API controllers and MVC controllers | Separated into two folders, Controllers and Controllers_Api, using specific custom base classes | Merges Web API into MVC; only one type of controller

Similar considerations apply to Angular 1 vs 2. This is a rewrite, but tools are being introduced into Angular 1 (see Angular 1.5) to ease this path. Adhering to best practices will make the transition smoother down the track (see the sketch after this list):

  • Angular 2 is entirely written in TypeScript, so new Pineapples code should be too.
  • Try not to use scope (store variables on the controller, not on the scope). Refer to the source code for examples of how this works.
  • Move to Angular 1.5 components where possible, rather than lower level directives.
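
As a minimal sketch of these practices (the module, component and controller names here are hypothetical, not taken from the source):

// TypeScript: an Angular 1.5 component storing state on the controller, not $scope
declare const angular: any; // Angular 1.x global

class SchoolSummaryController {
  public school: any;          // populated by the one-way binding below
  public isExpanded = false;   // view state lives on the controller ("$ctrl")

  public toggle(): void {
    this.isExpanded = !this.isExpanded;
  }
}

angular.module("pineapples")
  .component("schoolSummary", {
    bindings: { school: "<" },
    controller: SchoolSummaryController,
    template:
      `<div ng-click="$ctrl.toggle()">
         {{$ctrl.school.schName}}
         <div ng-show="$ctrl.isExpanded">{{$ctrl.school.schNo}}</div>
       </div>`
  });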

Development Web Stack

No web application is built without a web framework (or library). As mentioned previously, the frameworks used are ASP.NET 4.6 and Angular 1. In addition to the frameworks used for the development of the application itself, a modern web stack is used to manage various aspects of frontend development (i.e. Pineapples.Client), such as running tests and building the frontend part of the application:

  • NodeJS is the platform used to run JavaScript on the server. It is necessary to run the other tools (i.e. npm, grunt, bower). It is not part of the application once deployed but is used on the developer's workstation only.
  • NPM is installed with NodeJS above. It is the package manager and is used mainly to install grunt plugins and other tools used by the frontend, such as the testing framework. You can see the currently used npm packages in package.json, which you can modify to add new grunt tasks, or use the command line, e.g. > npm install grunt-plugin. Those packages are not part of the actual application in deployment but are used mainly on the developer's machine, in Pineapples.Client\node_modules\.
  • Bower must be installed globally using npm. From the command line you can simply > npm install -g bower. All dependencies on 3rd party components *used in the browser* are managed using bower. Those components that do not support bower are still added into bower.json with a reference to their source URL. The bower_components folder is not in source control, since its contents are entirely determined by bower.json. Modify bower.json to add new 3rd party libraries to the project, or use the command line from within the Pineapples.Client folder (e.g. > bower install --save-dev angular-cool-module).
  • GruntJS is used to automate various frontend tasks such as testing, linting, building, etc. The gruntfile.js contains all the configuration for the project management tasks. These are accessed in VS2015 from the Task Runner Explorer window. The two most important tasks are “build” and “deploy”, or “_build_deploy” to run one after the other. Careful review of the gruntfile.js must be done to understand what it does, as this will be constantly improving. Essentially, it gathers all files that will go into the browser (i.e. JavaScript, CSS, images, fonts) and prepares them for being served by the ASP.NET application. It puts most of them in a temporary Pineapples.Client\dist\ folder that is cleaned and recreated on each build. A typical first-time command line session is sketched after this list.
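
As a sketch, a first-time frontend setup from the command line could look like this (assuming the clone location and task names described above):

> cd C:\pineapples\Pineapples.Client
> npm install
> bower install
> grunt _build_deploy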

Database and Operating System

Only Microsoft SQL Server is supported. However, the application only requires the freely available MS SQL Server Express. This is unlikely to change, as the application makes heavy use of SQL stored procedures, which are not easily portable to other database engines.

Currently the application is developed on Windows and must be deployed on Windows; any recent version of Windows will do, preferably though not necessarily Windows Server.

Setting Up the Development Environment

The easiest way to set up a development environment and start developing on the Pacific EMIS is on Windows, following the instructions below.

Main Development Tools

Most developers currently use Visual Studio, ASP.NET 4.6 (comes with Visual Studio), MS SQL Server Express or Developer Edition, IIS, NodeJS/NPM and SourceTree/Git. Here are the steps, assuming you have a working Windows operating system (version 7, 8, or 10):

  1. Download and install the latest Visual Studio 2017.
  2. Download and install MS SQL Server Express. Note, however, that MS SQL Server Express does not include the SQL Profiler, an invaluable tool for development; unfortunately this feature only comes with the Developer or higher editions of the product.
  3. Enable the IIS feature.
  4. Download and install NodeJS. While Visual Studio 2017 comes with its own NodeJS, you should install the latest stable LTS version on the system (at the time of this writing this was NodeJS 6.11.2 LTS).
  5. Install Bower from the command line: > npm install -g bower.
  6. Install Grunt from the command line: > npm install -g grunt.
  7. Download and install SourceTree and Git.

Software Configuration Management

All software is managed through Git (version control) and currently Bitbucket (issue tracking, collaboration, etc.) in a privately accessible repository, currently located at https://bitbucket.org/softwords/pineapples. While it is open source, you need to fork it to your own Bitbucket account to clone it on your machine (see how to fork here). Then you can retrieve your own full local clone of the project with Git installed on your machine. Git can be used from the command line or through a more user friendly tool such as SourceTree.

In SourceTree

Create the folder C:\pineapples and follow this guide to use SourceTree to clone your local copy into C:\pineapples or some other location of your choice.

In short, your cloning in SourceTree could look something like the following. It is better to checkout develop, as this is where the stable source code lives. Once you get this working on develop you can start creating branches to develop new code.

Git command line

> cd C:\
> git clone https://ghachey@bitbucket.org/softwords/pineapples.git
The examples above put the code in C:\pineapples because it used to be very important to place the pineapples clone directly under C:\ to avoid overly deep folder structures. The NodeJS/NPM ecosystem largely addressed this problem on Windows with NPM version 3; it is therefore important to make sure VS uses the latest installed NodeJS/NPM binaries, as will be emphasized below.

Visual Studio Setup

Start VS and click Open - Project/Solution…. Locate the solution file at the root of the cloned repository downloaded to your local machine in the previous step. This will open the project in VS and automatically get VS to start downloading some of the frontend dependencies (Bower/npm). Give it a little time to complete; it will take a little longer the first time around.

Before you attempt to build the solution there are two things that will help avoid annoying errors/warnings.

Update SQL Server Data Tools in Visual Studio 2015

Essentially, go through steps 1) and 2) of the guide linked here.

Make sure Visual Studio uses your latest installed NodeJS/NPM

If you have already installed it as instructed above, click Tools - Options and make sure that VS searches the Windows PATH first, so that the newly installed NodeJS/NPM take precedence over the bundled external web tools (e.g. bower, npm, grunt).

You can then verify whether VS is actually using the node and npm binaries you think it is by opening Tools - NuGet Package Manager - Package Manager Console, typing the following commands and observing the output.
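
For example (node -v and npm -v print the versions of the binaries VS resolves; compare them with what a regular command prompt reports):

> node -v
> npm -v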

Databases Setup

You will need to set up two databases: the application's main database (e.g. MIEMIS, SIEMIS, SIEMIS-TEST, KIEMIS) and the Identity database (i.e. IdentitiesP). These should be given to you by one of the developers with appropriate knowledge and authority, until a sample database is created and made available to all in the database folder.

Restoring a database from a backup is documented here. Take note of the names of your new databases and also the name of the server (e.g. LOCALHOST, HOSTNAME\SQLEXPRESS); you will need these next.

In VS, open the file Web.config from the Pineapples project. There are two places you need to configure. First, locate the following line.

<add name="DefaultConnection" connectionString="Data Source=localhost;Initial Catalog=IdentitiesP;...

And replace localhost with your server name (e.g. MACHINENAME\SQLSERVER2012). Then locate the following lines.

<appSettings>
  <add key="server" value="localhost" />
  <add key="database" value="siemis" />
  <add key="context" value="siemis" />
  <add key="title" value="FedEMIS Online" />
  <add key="ReportServerUrl" value="http://localhost:8080/reportserver" />
</appSettings>

And replace localhost with your own server name (e.g. as above, MACHINENAME\SQLSERVER2012) and siemis with the name of your database (e.g. fedemis-test). The context is not the same as the database; it is the application name for the country deployment (e.g. miemis, kemis, siemis, fedemis).

Building the Solution

First, as a one time thing, you need to set the startup project of the solution. Right click on the Pineapples project in the Solution Explorer and click Set as StartUp Project. The Pineapples project should from now on appear in bold.

At this point, if you followed the previous sections closely, you should be able to right click on the Solution 'pineapples' and click Build Solution. The first time around this will require downloading all the NuGet packages (the npm and bower dependencies would already have been pulled, at least on a good working Internet connection). It should build 4 projects without errors.

To run the system, click as shown below. Note that the browser below was first set to Google Chrome, but the application can be started in any browser.

Still getting problems? The first thing to do if you run into any errors is to double check you have done all the steps here correctly. Then search the issues database in the online repository, as there is a chance the problem is a known one with a documented solution. You can also make sure the dependencies are all sorted, as in the following section. Finally, get in touch with the other developers of this project.

Verifying Dependencies

Make sure all projects have their dependencies sorted. Start with the Pineapples.Data project. Ensure that the references to Softwords.Datatools, Softwords.Web and Softwords.Redirector are present and point to the assemblies folder:

Ensure all references are present. Build Pineapples.Data and confirm it succeeds. Move to the Pineapples project. Check the references as above. Also check there is a reference to Pineapples.Data, and that it points to the current project in the solution:

Check in particular that references to Microsoft.Practices.Unity are resolved. These come from NuGet and should be available. Build the project Pineapples, then Build Solution. This should have the effect of running the gruntfile _build_deploy task, which is linked to the build of Pineapples.Client (which otherwise does not need to be built). You can of course run the gruntfile _build_deploy task on its own, for instance if you develop only on the client side (i.e. CSS, JavaScript, TypeScript).

JasperSoft Studio Setup

Most of the reports are done in JasperReports, a cross-platform and database agnostic open source reporting platform similar to Crystal Reports and MS SQL Server Reporting Services. The first thing to do is to download the latest version of JasperSoft Studio and install it on your computer. Currently the developers are using version 6.4, so it would be a good idea to run the same version to avoid compatibility issues.

Once it is installed, the developer should become familiar with the comprehensive user documentation for JasperReports, which is available at the same place. After successfully installing JasperSoft Studio and running some basic reports from the user documentation, you should be ready to run the Pacific EMIS reports.

You can access all reports for each adopting country (e.g. FEDEMIS, MIEMIS, SIEMIS, KIEMIS, etc.). They are managed, just like all the software code, in the pineapples Git repository you cloned in the previous step Software Configuration Management. All the reports are committed into the JaspersoftWorkspace folder of the pineapples repository. This means you can simply change the default location of the JaspersoftWorkspace to point to this directory. Say you cloned pineapples into C:\; you would simply click Jaspersoft Studio's File → Switch Workspace.

This should automatically get you all the Pacific EMIS reports for all adopting countries right into your reporting development environment.

Next you will need to make sure you have a database driver capable of talking to MS SQL Server. This can be downloaded here, or simply search for “MS SQL Server JDBC Driver Download” if the link gets broken. Unzip this directory somewhere on your computer (e.g. C:\Program Files\) and locate a file called sqljdbc42.jar. This is the file you will need to add to your project's referenced libraries. You can do this by right clicking the project MyReports in your Project Explorer and clicking Properties. Add the Jar file as shown in the following illustration.

TODO

  • Create the datasource for one database (same for others)
  • Open an example report and run it
  • Discuss standards to follow (all reports should be mostly replicated across all adopting countries, custom styling per countries, etc.)

Development Work-flow

The Pacific EMIS project constantly strives to improve its development operations in order to produce software of higher quality at a more efficient rate. This part of the developer guide will undoubtedly change and should be kept close at hand when developing on this project.

Frontend Development Tools

The application is tested using the Chrome web browser, which has good developer tools out of the box. For more advanced debugging of the frontend, the developers mostly use Fiddler. If you have not done so already, install and start Fiddler. As a one time thing, you can click on WinConfig to make sure Windows does not interfere with traffic capture.

To eliminate all the noise from using Fiddler as a proxy, you can filter it to only show connections to specific hosts; in our case we will include the two default hosts that VS uses to run the development version of the web application under the IIS Express web server. That is, localhost:52206 for HTTP connections and localhost:44301 for HTTPS connections. This can be done as shown in the following illustration.

Since we are developing an application that runs through an HTTPS encrypted channel, it is useful to decrypt this traffic in Fiddler so it can actually be examined and used for troubleshooting and development. Click Fiddler's Tools - Options and then the HTTPS tab, as shown below.

This should get you started with Fiddler, which is very useful for analyzing all your web application traffic.

Contributing Code General Workflow

In general, all contributions by third parties should be done by forking the public repository, creating a local branch to work on based on the issue that is assigned to you, and creating a pull request back to the original repository.

Forking the Public Repository

To fork you must first be logged in to your Bitbucket account. Point the browser to the https://bitbucket.org/softwords/pineapples URL and then click on the + as shown below.

Then Fork this repository as below.

Then fork with the following settings.

You should see it now under your own username as shown underlined below. Make sure you have SourceTree installed and click clone in SourceTree.

You should checkout the branch develop as below.

You are now ready to start working on your own copy of the pineapples repository. More on forking with bitbucket can be found at https://confluence.atlassian.com/bitbucket/forking-a-repository-221449527.html.

Creating Branch on your New Local Repository

Our normal process is to track work as issues in the issue tracking system. Contributors are typically assigned issues as shown below.

In this example, Brian Lewis would create a branch called issue354 and work on that branch for that particular issue (e.g. a bug fix, a new feature, etc.).

Creating a Pull Request back to Original Repository

Before you can create a pull request back into the original repository, there are two things you need to prepare.

1) The original repository may have evolved. You will need to pull those changes into your own fork to bring it back in sync; you are responsible for this.

2) Once in sync, you need to consolidate your PR into a single commit, with a commit message following our standard. Refer to the section “Commit Message Format Rules” below.

The core developers can help with this preparation if you are not an experienced Git user.
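
As a sketch, the preparation could look like this on the command line (assuming your fork is the origin remote, the original repository is added as a remote named upstream, and your branch is issue354):

> git remote add upstream https://bitbucket.org/softwords/pineapples.git
> git checkout develop
> git pull upstream develop
> git checkout issue354
> git rebase -i develop

The interactive rebase brings your branch up to date with develop and lets you squash your commits into a single commit with the required message format.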

Once your shiny new work is ready to go back to the original pineapples repository, you can create your pull request.

TODO Screenshot how to do this.

More on creating a pull request with bitbucket is available at https://confluence.atlassian.com/bitbucket/create-a-pull-request-to-merge-your-change-774243413.html

Database Schema Changes

Database schema migrations are currently done more or less manually with the help of a proprietary tool called SQLDelta. Currently only the most recent schema is available, and it is not easy to revert to older schemas from previous releases, so it is important to stay up-to-date. A developer's database schema can be brought up-to-date by syncing with the master reference database.

FIXME - Include steps and a simple schema upgrade with screenshots

Note that when you make schema changes, you should run the stored procedure common.afterschemaupgrade, which recompiles all views and stored procs.
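
For example, from a query window connected to the application database (assuming, as the name suggests, the procedure takes no parameters):

EXEC common.afterschemaupgrade;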

Testing Code

Currently not a lot of testing is being done, but eventually all new branches going into origin/develop with new features or bug fixes should contain corresponding unit tests, at a minimum testing the new feature or the bug being fixed.

FIXME develop this section further as we move forward. Here are some notes to get started:

  • The Karma test infrastructure is in place in the grunt file. Unit tests are added into the test folder. Ad-hoc test pages can be stored in testpages; these can be run in IIS Express, which is started with the grunt task iisexpress:server.
  • Establish and start to enforce lint standards in the TypeScript and JavaScript code (and hopefully in C# somehow).
  • Move to a test-driven methodology, and accrue regression test suites both for the C# code and the Angular code.

High Level Architecture

This whole section needs a good review and re-write.

To recap what was already introduced in the Introduction, here is what the repository of the Pacific EMIS looks like.

Data Retrieval Service

The architecture of the data retrieval service is shown below.

And here is a short discussion regarding the use of Entity Framework 6 vs. ADO.NET by the project's main software architect, Brian Lewis.

Templates

The architecture of templates is shown below.

Theme (Look and Feel)

Currently a little messy. The UI started out with a paid theme based on Bootstrap; much of that has since been removed, and the UI is now built with HTML and CSS “left overs” from the theme. The current idea on the road map is to move to a fully Angular Material based interface to provide a professional, consistent UI.

The Navigation menu in the side bar is loaded when the user logs in.

It is part of the packet of data returned with the BearerToken in response to api/token.

Both the menu and the home_site are picked up from the server from a *MenuKey* associated with the user. For ASP.NET Identity accounts, this is on the AspNetUsers table. For domain accounts, this is set to the default ALL. The MenuKey points into the table Navigation in IdentitiesP (i.e. the ASP.NET Identity database).

There are 3 types of entries in the IdentitiesP dbo.Navigation table:

  • Those like ALL, which represent a menu that may be assigned to a user
  • Submenus, which are parents of a collection of functions (e.g. schools)
  • End points, which are a specific system function (e.g. schools.list)

And here is a brief description of its columns (an illustrative example follows the list):

  • Icon: the fontawesome code for the icon to show on that menu.
  • Label: the text for the menu item.
  • State: for functions, this is the ui-router state to go to when that function is selected. Obviously, these need to be defined in the JavaScript/TypeScript code. Most features have a routine that defines these routes.
  • Children: for menu parents, a pipe-separated list of the IDs of its children. This can be recursive, but currently these children are all end points, i.e. entries that point to a state.
  • homestate: the starting state when the user logs in. When you navigate to the state site.home, you are redirected to this state.
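
For illustration only (these rows are hypothetical, not actual IdentitiesP content), a menu, a submenu and an end point could look like this:

ID            Label        State         Children
ALL                                      schools|settings
schools       Schools                    schools.list|schools.map
schools.list  School List  schools.list

The ALL row is a menu assigned to users via their MenuKey; its Children list points (recursively if needed) through submenus down to end points, each of which maps to a ui-router state.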

Roles are sent to the client. For ASP.NET users, these come from the IdentitiesP database. For domain users, it is currently just a list of Active Directory groups. A role can be associated with a ui-route on the client, and access to that route can be made dependent on belonging to the role. (Note that even if a route is not exposed through the menu, you can still navigate to it if you know its URL.) This is managed by the component *angular-permission*.

Also sent to the client are *Filters*. These are restrictions on what a user may select in the searcher pages. For example, a filter of *District=DC* will mean that the Province dropdowns will be locked to province *DC* (note that this is ultimately enforced on the server, based on the bearer token). Filters are set up in ASP.NET Identity as Claims:

Menu, filters and roles are sent to the client at login, but are also encapsulated as Roles and Claims in the bearer token. Their use on the client is more a courtesy than a security feature: on the client, we don't show the user things that they will not be able to do on the server, where the Claim is enforced based on the token.

FIXME We need to better integrate the Roles, Filters and menu that are retrieved for a user who authenticates against Active Directory. This is important in Solomon Islands, for example, where we will need to distinguish between users logging in to the government domain. Brian suggests that this will be some data-driven mapping between Active Directory groups and some abstract ASP.NET Identity account representing each group. So at login, if you are identified as belonging to a specified AD group, you will get the MenuKey, Filters and Roles associated with that group.

Lookups and Configuration

The Pacific EMIS is mainly configured and adapted to different countries through lookups in the database. Alongside this document is kept a spreadsheet called PineapplesLookups, which details all the lookups. These must be understood and customized for any new deployment.

Note on Report Parameters

While JasperServer is where the reports are stored and run, they can be accessed through a RESTful API. The Pacific EMIS can automatically integrate all published reports in a JasperServer for a professional unified UI design. The way this works in Pacific EMIS Online is that the context you are running the report in (e.g. from an individual school) will supply report parameters with specific names if they are required. For example, from the new Reports tab within a particular school, it will try to push a parameter ‘SchoolNo’ containing the current school's ID, instead of the user having to manually enter the SchoolNo for which the report is sought.

If any required parameters are not supplied, you'll get a dialog prompting for all parameters, with any that can be supplied already filled in. On this basis, if your reports use other parameter names they will still run in Pacific EMIS, but you will have to key in the parameters (just as you would in JasperServer); they won't be supplied for you.

The dialog displayed in Pacific EMIS Online will also respect any dropdown list associated with a Jasper Input Control attached to the report. So the aim is to have a collection of such input controls and queries set up in common in all our Jasper installations. In fact these will be the same queries and names as used in the Lookups within Pacific EMIS Online.

Technology Documentation

Various approaches and tools used by the developers of this project are documented here.

Object to Relational Data Mapping

Originally pineapples made use of Linq2SQL but it has since moved to EF6. Key differences in moving to Entity Framework from Linq2Sql:

L2S uses its custom dbml designer to build its model of the database. EF uses explicit, and simpler, classes:

  • context class: the overarching data “context”, PineapplesEFContext.
  • entity classes: one class for each entity, basically corresponding to a database table. These classes are now in Pineapples.Data.Models; e.g. Books.cs.

The context class contains a collection property for each entity, e.g. cxt.Books. While called 'POCO' classes (= 'plain old C# objects'), they are not really that 'POCO': they are decorated with lots of attributes used by EF when accessing this data. How do we generate these classes, now that we don't have the Linq2Sql designer any more?

We use this tool: POCO-Generator.

It generates everything we need it to; its one failing is that it can be quite slow on a big database (it parses the entire database, including all stored procs, on load). However, it has a command line mode, which is probably all we'll ever need. While there is a Visual Studio add-in that works with the templating infrastructure (TT), the simpler approach was found to be using the POCO tool directly and copying the code into VS.

Once installed you can use options as shown in the following figure as starting point.

The equivalent command line would be this if you prefer:

Note that the “Binder” architecture is much the same as before with Linq2Sql; there are two implementations of ModelBinder: one for L2S and one for EF. All the Pineapples binders have now been changed to derive from the EF model binder. The only change is that there is now an Update method rather than SubmitChanges, which is very much an L2S term.

The Binders also make use of a custom attribute that was added, ChangeTrackingAttribute, which identifies the create/edit user and datetime fields. The Binder updates all of these as appropriate when inserting or updating a record.

To get all the pCreateUser, pEditDateTime, etc. fields correctly attributed, edit your POCO class to derive from ChangeTracked.

Kobo Framework Data Collection

This feature allows custom school inspections to be implemented as Kobo Toolbox forms and loaded into SIEMIS. Of course, this facility can apply to any Pacific EMIS installation, and can work with Excel spreadsheets of conforming layout, regardless of whether or not those spreadsheets originated with Kobo.

Set Up

The first step in set up is to create in SIEMIS a School Inspection Type to represent the custom inspection type. This can be done either using the Admin function in desktop SIEMIS, or the new /tables maintenance functions in Pacific EMIS. Next, design the form in Kobo Toolbox. There are just four requirements:

  • The form must have a column with Id inspectionType (case-sensitive). This should be a calculated field, and its value should be the new inspection type code set up in SIEMIS.
  • The form must have a column schoolNo. It doesn't matter how this is implemented; for example, as a straight text box, as a dropdown list, or even as a calculation of some sort. The value in this column is mandatory and must be a valid SIEMIS School No.
  • The data columns should have simple, consistent names. Kobo will assign a column name by using the input prompt for that column, replacing spaces with underscores. You should remove these generated names and replace them with simpler ones. These column names become the field names you will manipulate in SIEMIS, so don't get stuck with unwieldy ones. Note also that a column name must be a valid Xml attribute identifier. Specifically, it cannot begin with a number, and must contain only letters, digits, hyphens, underscores, and periods.
  • The form must have columns start and end recording the start and end date/time of the survey. These are always included in Kobo Toolbox forms, but if you are getting the Excel from another source, or creating it by hand, you will need to ensure these fields are present.

Loading

A form meeting this design can be loaded into SIEMIS using the kobo/upload page. First, download the form data from Kobo as an Excel file. IMPORTANT: The Kobo Excel file as downloaded has a slightly non-standard format, so open it in Excel and immediately save it to straighten this out. You may also review the downloaded Excel and make any corrections you think necessary. When you load the file into SIEMIS, the uploader writes a new school inspection record of the custom type for each school. The date of the inspection is taken from the Kobo start and end fields (present in every Kobo survey). The custom fields in your survey are stored as a single Xml “blob” on the inspection record.

Reporting

To report on the inspection type, create a new view in the SIEMIS database with the name pInspectionRead.<inspectiontype>; i.e. for inspection type ENVER this view is pInspectionRead.ENVER. The job of this view is to dig your custom fields out of the school inspection's Xml. This sample shows how:

ALTER VIEW [pInspectionRead].[ENVER]
WITH VIEW_METADATA
AS
SELECT S.*
, S.extraData.value('(/row/@Index)[1]','int') rowIndex
 
, S.extraData.value('(/row/@M0)[1]','int') M0
, S.extraData.value('(/row/@F0)[1]','int') F0
, S.extraData.value('(/row/@C0)[1]','int') C0
 
, S.extraData.value('(/row/@M1)[1]','int') M1
, S.extraData.value('(/row/@F1)[1]','int') F1
, S.extraData.value('(/row/@C1)[1]','int') C1
 
, S.extraData.value('(/row/@M2)[1]','int') M2
, S.extraData.value('(/row/@F2)[1]','int') F2
, S.extraData.value('(/row/@C2)[1]','int') C2
 
, S.extraData.value('(/row/@M3)[1]','int') M3
, S.extraData.value('(/row/@F3)[1]','int') F3
, S.extraData.value('(/row/@C3)[1]','int') C3
 
, S.extraData.value('(/row/@M4)[1]','int') M4
, S.extraData.value('(/row/@F4)[1]','int') F4
, S.extraData.value('(/row/@C4)[1]','int') C4
 
, S.extraData.value('(/row/@M5)[1]','int') M5
, S.extraData.value('(/row/@F5)[1]','int') F5
, S.extraData.value('(/row/@C5)[1]','int') C5
 
, S.extraData.value('(/row/@M6)[1]','int') M6
, S.extraData.value('(/row/@F6)[1]','int') F6
, S.extraData.value('(/row/@C6)[1]','int') C6
 
, S.extraData.value('(/row/@M7)[1]','int') M7
, S.extraData.value('(/row/@F7)[1]','int') F7
, S.extraData.value('(/row/@C7)[1]','int') C7
 
, S.extraData.value('(/row/@M8)[1]','int') M8
, S.extraData.value('(/row/@F8)[1]','int') F8
, S.extraData.value('(/row/@C8)[1]','int') C8
 
, S.extraData.value('(/row/@M9)[1]','int') M9
, S.extraData.value('(/row/@F9)[1]','int') F9
, S.extraData.value('(/row/@C9)[1]','int') C9
 
, S.extraData.value('(/row/@M10)[1]','int') M10
, S.extraData.value('(/row/@F10)[1]','int') F10
, S.extraData.value('(/row/@C10)[1]','int') C10
 
, S.extraData.value('(/row/@M11)[1]','int') M11
, S.extraData.value('(/row/@F11)[1]','int') F11
, S.extraData.value('(/row/@C11)[1]','int') C11
 
, S.extraData.value('(/row/@M12)[1]','int') M12
, S.extraData.value('(/row/@F12)[1]','int') F12
, S.extraData.value('(/row/@C12)[1]','int') C12
 
, S.extraData.value('(/row/@M13)[1]','int') M13
, S.extraData.value('(/row/@F13)[1]','int') F13
, S.extraData.value('(/row/@C13)[1]','int') C13
 
, S.extraData.value('(/row/@M14)[1]','int') M14
, S.extraData.value('(/row/@F14)[1]','int') F14
, S.extraData.value('(/row/@C14)[1]','int') C14
 
, S.extraData.value('(/row/@Photo0_1)[1]','int') Photo0_1
, S.extraData.value('(/row/@Photo0_2)[1]','int') Photo0_2
, S.extraData.value('(/row/@Photo0_3)[1]','int') Photo0_3
 
, S.extraData.value('(/row/@Photo1_1)[1]','int') Photo1_1
, S.extraData.value('(/row/@Photo1_2)[1]','int') Photo1_2
, S.extraData.value('(/row/@Photo1_3)[1]','int') Photo1_3
 
, S.extraData.value('(/row/@Photo2_1)[1]','int') Photo2_1
, S.extraData.value('(/row/@Photo2_2)[1]','int') Photo2_2
, S.extraData.value('(/row/@Photo2_3)[1]','int') Photo2_3
 
, S.extraData.value('(/row/@Photo3_1)[1]','int') Photo3_1
, S.extraData.value('(/row/@Photo3_2)[1]','int') Photo3_2
, S.extraData.value('(/row/@Photo3_3)[1]','int') Photo3_3
 
, S.extraData.value('(/row/@Photo4_1)[1]','int') Photo4_1
, S.extraData.value('(/row/@Photo4_2)[1]','int') Photo4_2
, S.extraData.value('(/row/@Photo4_3)[1]','int') Photo4_3
 
, S.extraData.value('(/row/@Photo5_1)[1]','int') Photo5_1
, S.extraData.value('(/row/@Photo5_2)[1]','int') Photo5_2
, S.extraData.value('(/row/@Photo5_3)[1]','int') Photo5_3
 
, S.extraData.value('(/row/@Photo6_1)[1]','int') Photo6_1
, S.extraData.value('(/row/@Photo6_2)[1]','int') Photo6_2
, S.extraData.value('(/row/@Photo6_3)[1]','int') Photo6_3
 
, S.extraData.value('(/row/@Photo7_1)[1]','int') Photo7_1
, S.extraData.value('(/row/@Photo7_2)[1]','int') Photo7_2
, S.extraData.value('(/row/@Photo7_3)[1]','int') Photo7_3
 
, S.extraData.value('(/row/@Photo8_1)[1]','int') Photo8_1
, S.extraData.value('(/row/@Photo8_2)[1]','int') Photo8_2
, S.extraData.value('(/row/@Photo8_3)[1]','int') Photo8_3
 
, S.extraData.value('(/row/@Photo9_1)[1]','int') Photo9_1
, S.extraData.value('(/row/@Photo9_2)[1]','int') Photo9_2
, S.extraData.value('(/row/@Photo9_3)[1]','int') Photo9_3
 
, S.extraData.value('(/row/@Photo10_1)[1]','int') Photo10_1
, S.extraData.value('(/row/@Photo10_2)[1]','int') Photo10_2
, S.extraData.value('(/row/@Photo10_3)[1]','int') Photo10_3
 
, S.extraData.value('(/row/@Photo11_1)[1]','int') Photo11_1
, S.extraData.value('(/row/@Photo11_2)[1]','int') Photo11_2
, S.extraData.value('(/row/@Photo11_3)[1]','int') Photo11_3
 
, S.extraData.value('(/row/@Photo12_1)[1]','int') Photo12_1
, S.extraData.value('(/row/@Photo12_2)[1]','int') Photo12_2
, S.extraData.value('(/row/@Photo12_3)[1]','int') Photo12_3
 
, S.extraData.value('(/row/@Photo13_1)[1]','int') Photo13_1
, S.extraData.value('(/row/@Photo13_2)[1]','int') Photo13_2
, S.extraData.value('(/row/@Photo13_3)[1]','int') Photo13_3
 
, S.extraData.value('(/row/@Photo14_1)[1]','int') Photo14_1
, S.extraData.value('(/row/@Photo14_2)[1]','int') Photo14_2
, S.extraData.value('(/row/@Photo14_3)[1]','int') Photo14_3
 
FROM pInspectionRead.SchoolInspections S
WHERE S.InspTypeCode = 'ENVER'
GO

Here the M0, F0, C0, etc. fields correspond to the custom field names created in the Kobo designer. extraData in this view represents the Xml blob; the value function extracts the fields by name. The second part of the value function specifies the data type of the extracted value: use int for whole numbers, nvarchar(50) for text, and float (or decimal) for numbers that may contain fractions. These same names are used for the resulting columns in the output, as shown here:
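
As another example of the typing, a free-text field would be extracted with nvarchar rather than int (the @Comment attribute here is hypothetical):

, S.extraData.value('(/row/@Comment)[1]','nvarchar(50)') Comment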

This data could be used to source a Jasper report, a pivot table, or other reporting output. Further, the stored procedure pInspectionRead.ReadInspection <inspectionID> will return the data for a single inspection, using the appropriate query for the inspection type (if an override exists).
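
For example (the inspection ID 1234 is hypothetical, and the procedure is assumed to take the inspection ID as its single parameter):

EXEC pInspectionRead.ReadInspection 1234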

This could be built upon in future development to allow custom user interface by inspection type within Pacific EMIS.

Reporting Framework

While the system can potentially support JasperReports, SQL Server Reporting Services and others, the reporting platform currently most used is JasperReports from JasperSoft Community, a mature reporting platform including the engine, a visual designer and a report server.

The JasperServer supports a comprehensive RESTful API for integration with other systems and this is what is used by the Pacific EMIS to include professional reports directly into the Pacific EMIS web UI.

Integration provides:

  • A single page listing all KEMIS reports in the Jasper server
  • A page listing that subset of reports relating to a particular entity (e.g. Schools, Teachers)
  • A tab on Entities showing reports relating to a single item (e.g. a school, a teacher)
  • Opportunities to embed reports at context-specific places in the user interface. In other words, including any report anywhere.

Jasper Technologies Installation

A developer wanting to work with this would need to:

Jasper Technologies Configuration

Assuming you have already cloned the software repository and installed Jasper as in the previous step, you are almost ready to get started. The first thing to do is to set your Jaspersoft Studio workspace to point to the location where all the reports for this project are located. For example, the docs clone the software repository directly into C:\. If you did the same, then all reports for this software project are located in C:\pineapples\JaspersoftWorkspace\ and you would change the workspace to look like the following. You can do this from the main menu's File - Switch Workspace.

Second, since we are using the MS SQL Server database, we will need to install the SQL Server JDBC driver. You can install this anywhere, but you will need to know where, since you will need to tell Jaspersoft Studio where it is. As shown in the illustration below, click on Project Explorer, MyReports and Properties, then Java Build Path, Libraries and Add External JARs. From there, browse to and locate your sqljdbcXX.jar (where XX is the latest version you have installed). Add it and Apply.

Reports Structure

For things to work as designed and expected, you will need to follow some simple conventions for how you structure your reports. Basically, it should reflect what is shown in the following illustration. That is, in MyReports we have a list of country contexts, all in lower case (e.g. fedemis, miemis, siemis, kemis, etc.). Each of those countries has a list of EMIS modules (e.g. Exams, Schools, Teachers, Students, etc.). Those EMIS modules have reports relevant to their respective modules; for example, Schools contains school reports. Each module can potentially have EMIS module instances (e.g. a school, a teacher) containing reports for individual instances (e.g. a school card report, a teacher profile report). Other reports can for now be organized in any way, but may need to be included manually in the UI as opposed to being automatically listed in their respective locations. The Images and Templates folders simply contain reusable resources like logos and “stylesheets”.

Running and Deploying Reports

TODO

Report Parameters

The EMIS will attempt to provide the parameters to the Jasper report according to the context from which the report is invoked. For example, when invoking a report from the School folder, displayed inside the School UI, it will pass the current value of schNo. The parameter accepting the school ID must be named SchoolNo. If this is the only mandatory parameter, the report will be generated without further UI prompt. If other mandatory parameters are defined but not set, a dialog will prompt for them. Similarly, for reports relating to an individual teacher, use the parameter TeacherID to accept the teacher's identifier (tID, the primary key of TeacherIdentity).
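
In the report design this simply means declaring the parameter with the exact expected name; for example, in the report's jrxml source:

<parameter name="SchoolNo" class="java.lang.String"/>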

For reports relating to a set of schools, you may use the parameter names corresponding to those accepted by the School filter. You can find what these are in Pineapples.Data → Models → SchoolFilter.cs. You can find how to name the parameters the same way for other modules (e.g. Teachers, School Accreditations). The most wanted parameters for schools could be:

  • SchoolNo
  • SchoolType
  • District
  • Island
  • ElectorateN
  • ElectorateL
  • Authority

In the Schools report page, these parameters will be supplied where possible from the School filter. In this case, the parameters dialog will always be displayed when invoking the report.

In the general reports page, listing all reports, the parameters dialog will always be displayed; no defaults will be provided by the EMIS.

InputControls and Queries

We are establishing InputControls and queries to provide dropdown lists for parameters associated with the main kemis code types. If connected to your report, you'll get a dropdown list for the parameter in both the JasperServer UI and the EMIS UI (when you get a parameter dialog).

TODO More details on this soon.

Integration with Pacific EMIS

To effectively integrate the JasperReports server with the Pacific EMIS you need to make sure the relevant part of the Web.config is edited accordingly (i.e. jasperUrl, jasperUser, jasperPass). This will likely differ between development and production.

  <appSettings>
    <add key="server" value="localhost" />
    <add key="database" value="fedemis-test" />
    <add key="context" value="fedemis" />
    <add key="title" value="FedEMIS Online" />
    <add key="ReportServerUrl" value="http://localhost:8080/reportserver" />
    <add key="jasperUrl" value="http://localhost:8082/jasperserver" /> <!-- Reports feature will not work unless JasperServer deployed correctly -->
    <add key="jasperUser" value="jasperadmin" />
    <add key="jasperPass" value="jasperadmin" />
    <add key="FileDb" value="c:\files\filedb" /> <!-- This must be set with folder created on filesystem or documents feature will not work -->
  </appSettings>

Dashboard Framework

General

The dashboard framework is built on the collaboration between objects of two types:

  • a Dashboard; and its
  • DashboardChild objects.

Each DashboardChild provides some rendering (chart, table, map … ) or some data. The Dashboard provides a set of services to these objects.

The Dashboard:

  1. provides data to the DashboardChild
  2. manages “filters” ie selections to apply to that data
  3. is a “broker” for communication between Child objects
  4. manages the disposition on the page of the Child objects.

There are a couple of models of dashboard, based on the type of data that is made available to the children.

  • CrossfilterDashboard: the dashboard builds a crossfilter from data that is generally retrieved from a warehouse table. The Dashboard will create useful crossfilter dimensions and all this is accessible to the children.
  • PAFDashboard: the dashboard loads PAF data (i.e. VERMPAF.xml), and exposes the API for children to drill into this.
  • ModelDashboard: the dashboard loads an instance of a complex object (e.g. school, student, teacher), so children can work with its collections (e.g. surveys, enrolments)

The dashboard functionality is implemented through a combination of:

  • code: TypeScript classes that can be extended to form new AngularJS components;
  • master layouts: implemented via MVC (i.e. Razor) master pages, used by each dashboard and each dashboard child;
  • css: to manage dynamic layouts and animation.

Layout

The dashboard uses masonry.js to lay out its children in a grid. The grid is based on units of 150px × 150px. Each child decides what size it wants in these units (e.g. 3×2, 5×4).

This is then specified in css like this:

class="dashboard-wrapper height3 width4"

Selected Child

The dashboard allows the user to 'select' a child on the page, and make that child “lift” and expand to fill the full page width. Children can test whether they are Selected, or Unselected (i.e. some other child is selected) and can respond accordingly. Note that most of the time all components are neither Selected nor Unselected.

Options

The dashboard manages an options class whose properties specify possible filters on the data. These properties are named selectedYear, selectedDistrict, selectedSchoolType, etc. When an option is changed, the change is communicated by the dashboard to all children. Each child can then adjust its display in response: it may filter its data, highlight a bar or slice of a chart, or highlight a row in a table.

A child has access to the options object (via this.dashboard.options) and can set its properties directly, for example in response to clicking a bar in a chart or a row in a grid. The flow (illustrated in the sketch after this list) is:

  • the user clicks a bar in a chart
  • the click handler in the child component sets the option appropriately
this.dashboard.options.selectedDistrict = <district represented by the bar>
  • the dashboard detects the option change, and may action it if required
  • the dashboard communicates the option change to all children.
  • this fires the onOptionChange event in each child, which executes any reconfiguration
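
To make this collaboration concrete, here is a minimal TypeScript sketch; the class and member names are modeled on the description above (and the Communication section below), not taken verbatim from the source:

// Hypothetical shapes for the dashboard and its options
interface DashboardOptions {
  selectedYear?: number;
  selectedDistrict?: string;
}
interface Dashboard {
  options: DashboardOptions;
}

// Stand-in for the real base class in Pineapples.Client
abstract class DashboardChildBase {
  constructor(public dashboard: Dashboard) { }
  public onDashboardReady(): void { }          // data is ready
  public onOptionChange(change: any): void { } // an option changed
}

class EnrolmentChartChild extends DashboardChildBase {
  public onDashboardReady(): void {
    // build the chart now that the dashboard has prepared its data
  }

  public onOptionChange(change: any): void {
    // change looks like { selectedDistrict: { new: "DC", old: undefined } };
    // most children simply re-read this.dashboard.options and re-render
  }

  public onBarClick(district: string): void {
    // broker the selection through the dashboard: setting the option causes
    // the dashboard to notify every child, including this one
    this.dashboard.options.selectedDistrict = district;
  }
}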

In a CrossfilterDashboard, the dashboard responds to an option change by appropriately applying (or removing) a filter on the relevant dimension. In this circumstance, a child built on the crossfilter has no work to do, since the change in filter on the dimension is transparently applied to any crossfilter groups built on that dimension.

Communication

Each child holds a strongly-typed reference to its dashboard. This is implemented via the require option in the component definition class (see the earlier comment). Communication from the dashboard to the children is all handled by bindings, so there is no need for the child to set up any watchers.

The base classes respond to these bindings by firing particular events; so when developing a child component, you need only derive from the base class, then override the event to respond to the binding change.

Bindings and their events are:

dashboardReady (onDashboardReady)

Signals that the dashboard has prepared data ready for consumption by the children. The child component should implement any setup in onDashboardReady.

The property dashboardReady in the Dashboard is passed to the client. This is just a number, which the dashboard should increment whenever its data changes. This allows for the possibility of more interactive dashboards (i.e. on change of option, rather than filtering an existing pool of data, the dashboard could respond by fetching new data, then incrementing dashboardReady to inform the clients of the change).

optionChange (onOptionChange)

Receives an 'optionchange' object that has a property for each option changed, with old and new values (e.g. { selectedYear: { new: 2017, old: 2016 } }).

This value is passed to the onOptionChange event; however, the child will most probably work directly with dashboard.options in response to this event.

selectedChild

The selected child has changed. The child can check whether it has become the selected child and react accordingly. Note that while children get explicit notification of such changes, many responses can be implemented without any action in code. Mechanisms that facilitate this include:

  • Crossfilter: as noted, filters applied in the dashboard in response to an optionchange are transparent to the client
  • ng-class / css
  • a table can make use of utility functions in the base ChildDashboard class and in the options class, together with css and the ng-class and ng-show directives, to render changes in selected options (e.g. <tr ng-class="{'selected': vm.dashboard.options.isSelectedYear(d.year)}">)

Reporting

The dashboard child may have an associated report in the Jasper report tree. This can be hard-coded in the defaultReportPath property of the child, or else passed in via the binding reportPath (i.e. report-path). The default handling is:

  • when a report is set, an icon appears in the child's header
  • clicking this icon runs the report
  • the child can implement setReportContext to set up report parameters; this is intended to be used for any mappings between the current dashboard options and the report parameters
  • as usual, any required parameters that are not supplied will be prompted for at runtime

Low Level Documentation and API

The lower level documentation about software design, application programming interfaces, small gotchas and all other nitty-gritty details about the source code is written directly inside the source code. It would be nice to eventually be able to extract and export it to formats such as HTML or PDF using a tool like Doxygen. Currently the place to access it is directly inside the source code, as this documentation process has only just started.

Since many things are as yet undocumented, here are a few things to keep in mind:

  • There is no comprehensive documentation of the Web API. In the code, look in the Api files on the client, and the WebApiConfig.cs class in App_Start in Pineapples, which is where most routes are defined. Again, Fiddler gives you a look at what is being fired off to the server as you perform each operation.
  • JSON is the data format used by the Web API (except for the AtAGlance call, see next point). Parameters passed up to the server are converted into objects or variables sent to the method in the Web API controller. This happens automatically, but there are ways to dig into the metadata to control how it happens. Similarly, objects returned from the controller get a default conversion to JSON to send back in the response; you can control what this JSON looks like if you need to. In many cases, what we send back is a “DataResult” object, which is one or more named DataTables, together with metadata defining the page read, page size, the total number of records, and the first/last record. Using Fiddler, you'll see both the request sent up and the response sent back.
  • The AtAGlance pages are built from a static Xml file containing a table of aggregated values. Particular values are dug out of this Xml using jQuery as a kind of client-side “xpath”. Some bulk operations in Pineapples pass blocks of XML representing a collection of records into the SQL Server; the Enrolment edit page does this. So the JSON returned from the client is transformed to XML, then passed to the SQL Server.

eSurvey Technology

Deprecated in favour of the data collection workbook in most cases.

The Pacific EMIS makes use of what we call “PDF eSurveys”, or “eSurveys” for short. Essentially, the eSurveys are PDF form documents used to submit the surveys, obsoleting the old paper based process. The eSurveys are built using a proprietary tool called CenoPDF, so a relatively inexpensive license must be bought to create the surveys (or edit them in the future). CenoPDF is a Microsoft Word plugin that adds the ability to edit, convert and create fillable PDF form files with text boxes, check boxes, buttons, annotations, actions, JavaScript, etc. directly in Microsoft® Word, Microsoft® Publisher and Microsoft® Excel.

Creating/Editing the PDF eSurvey

The first thing to do for anybody creating or editing the eSurveys is to read the CenoPDF User Guide. Once familiar with CenoPDF, one can start creating a new eSurvey or editing an existing one to a new revision. When starting from scratch it is a good idea to get the source Word documents from Kiribati, which at the time of this writing has the most comprehensive eSurvey using this technology in the Pacific.

These Word documents can currently be obtained from one of the Pacific EMIS team members, but ideally they should eventually be included in the software repository and version controlled there as well. Due to the very large number of data cells to process, it is not recommended to try to create the whole survey in a single big Word document; one should break it down into several documents and then re-attach them into a single PDF once complete.

The process for creating new eSurveys for a given country is something like this:

  1. Go through a given eSurvey (e.g. the Primary School Annual Survey) from Kiribati and decide which parts of the eSurvey apply to you.
  2. Collect all of Kiribati's broken-down Word documents for that eSurvey and download them somewhere on your workstation.
  3. Pick the relevant parts that you need for your eSurvey.
  4. Edit what needs to be edited (e.g. some countries call education levels Class 1, Class 2, etc. while others refer to them as Grade 1, Grade 2, etc.). Other plain-text labels, such as names of provinces, will need to be edited as well.
  5. Save your Word document under another name (e.g. SIEMIS_PRI_TRIN.docx for the SIEMIS primary school survey part recording transfers-in data).
  6. Generate the PDF for this Word document using the CenoPDF export-to-PDF feature (not the MS Word export to PDF).
  7. Do this for all the Word documents to get all the data grids needed in the survey, effectively constructing the whole survey piece by piece.
  8. Combine all the generated PDF documents into the final PDF eSurvey and call it something like “SIEMIS_2016_PRI_v1.pdf”, with the “_v1” as a simple means of making clear which version of the eSurvey this is in case it changes in the future. The combine step is done with a command line tool by executing the following two commands:
> "C:\the-path\cenoPostProcess.exe" -d C:\the-path\Pineapples.eSurvey\siemis\PRI\out -c "SIEMIS_2016_PRI.pdf" -f "SIEMIS_PRI*.pdf" --nopause

> "C:\the-path\cenoPostProcess.exe" -d C:\the-path\Pineapples.eSurvey\siemis\PRI\out -f "SIEMIS_2016_PRI.pdf" --title "Solomon Islands Primary Survey 2016"  -j "C:\the-path\Pineapples.eSurvey\js"
There is a small issue where the final generated eSurvey is missing, or more specifically hiding, some labels. There is an easy workaround: open every CenoPDF-generated document in Adobe Acrobat and save it without making any changes, accepting to overwrite the file. Only run the final step 8 above after doing this.

The process for editing is much simpler:

  1. Locate the Word document that needs to be edited, or create a new one if needed.
  2. Generate the PDF for the documents you changed.
  3. Attach all the PDF documents together into the new PDF eSurvey and increase its revision number (e.g. “SIEMIS_2016_PRI_v2.pdf”).

PDF eSurvey Manager

Currently the PDF eSurvey Manager is a standalone desktop application that must be installed on a machine to “manage” eSurveys; in other words, it aims to do the following:

  • Read existing school surveys from a database (production or a test database)
  • Pre-populate surveys with some data before sending them to schools
  • Upload completed surveys into the database (again, production or a test database)

The PDF eSurvey Manager repository is available at FIXME INCLUDE LINK TO REPO. You only need to download the executable unless you are a developer wanting to fix or improve the tool. Detailed step-by-step instructions on how to install, configure and use the PDF eSurvey Manager belong in the User Guide, since end users are the target audience.

PDF eSurvey Process

The PDF eSurvey complements the new online web user interface. Schools with a reliable Internet connection can access the survey form online and submit it directly in the Pacific EMIS web application. Schools without a reliable Internet connection, but with access to a laptop or large tablet running the freely available Adobe Acrobat Reader, can use the PDF eSurvey instead. No good Internet connection is needed: the PDF eSurvey can be sent to the school just like the old paper-based survey, on a USB disk for example, and returned to the data entry officers just as before.

A more detailed user-oriented process is documented in the User Guide, but in essence it goes like this:

  1. Start the eSurvey Manager
  2. Create a pre-populated survey for one or more schools. Pre-populated eSurveys contain some important information such as the school name and ID, and some data from previous years as a guide.
  3. Distribute those pre-populated eSurveys to their respective schools on a USB stick, or by email if they have it.
  4. Schools fill in their surveys as before, but directly on a computer instead of on paper.
  5. Schools return the filled eSurveys on the same USB stick or by email.
  6. Data Entry Officers look through the eSurveys for any issues and make sure they are all correctly completed.
  7. Start the eSurvey Manager and load the data from the PDF eSurveys into the database.
  8. Verify all data was uploaded successfully by browsing to the survey on the Pacific EMIS online web application

Data Collection Workbook

This is the newest and most comprehensive main data collection tool. It collects schools' resources, student rosters, teaching/non-teaching staff and WASH data, and can be extended to collect more.

TODO - Any information pertinent to developers

Set the data connection to the country's production database. Currently this connection is used only for the schools in the SchoolsList sheet, but it can also be used to refresh the “lookups” in the List sheet. From the Excel top menu go to Data → Connections and then edit the properties (in the Definition tab).

Naturally, the lookups in the List sheet will need to be changed. Use the Name Manager from the Formulas top menu in Excel to make sure you adapt their ranges, since the number of lookups will often change.

workbook-list-name-manager.jpeg

Some of the data from the workbook may not always have a place in the SQL schema. In those cases the database stores those parts as XML, which can be queried like this:

SELECT TSV.schNo
, TSV.svyYear
, SS.ssSchType
, tchFirstName
, tchFamilyName
, tcheData.value('(/row/@First_Name)[1]', 'nvarchar(50)') xFirstName
, tcheData
, tchClass
, tchClassMax
, tchSector
, tcheData.value('(/row/@ECE)[1]', 'nvarchar(1)') G0
, tcheData.value('(/row/@Grade_1)[1]', 'nvarchar(1)') G1
, tcheData.value('(/row/@Grade_2)[1]', 'nvarchar(1)') G2
, tcheData.value('(/row/@Grade_3)[1]', 'nvarchar(1)') G3
, tcheData.value('(/row/@Grade_4)[1]', 'nvarchar(1)') G4
, tcheData.value('(/row/@Grade_5)[1]', 'nvarchar(1)') G5
, tcheData.value('(/row/@Grade_6)[1]', 'nvarchar(1)') G6
, tcheData.value('(/row/@Grade_7)[1]', 'nvarchar(1)') G7
, tcheData.value('(/row/@Grade_8)[1]', 'nvarchar(1)') G8
, tcheData.value('(/row/@Grade_9)[1]', 'nvarchar(1)') G9
, tcheData.value('(/row/@Grade_10)[1]', 'nvarchar(1)') G10
, tcheData.value('(/row/@Grade_11)[1]', 'nvarchar(1)') G11
, tcheData.value('(/row/@Grade_12)[1]', 'nvarchar(1)') G12 
FROM pTeacherRead.TeacherSurveyV TSV
       INNER JOIN SchoolSurvey SS
             ON TSV.ssID = SS.ssID
ORDER BY tchClass

We can effectively have values pass through from the workbook straight to the SQL server, and have them appear in views, without finding a home for those values in the relational storage.

Security Considerations

Security is an increasingly important aspect of any project and the Pacific EMIS is no exception. This section describes the security framework of the project.

Authentication

Currently in progress: User Roles Implementation

ASP.NET Identity includes a concept of “Roles”, which are in effect a specific type of “claim” in the claims-based security model. For users authenticated by Identity (i.e. those defined in AspNetUsers in IdentitiesP), these roles are defined in the IdentitiesP database. For users authenticated against the domain, the roles collected are the domain groups that the user belongs to. Being claims, the roles are embedded into the bearer token, so every request made to the server has access to the role membership of the user.

Web API lets you decorate methods with a Role attribute to determine which role(s) can access that method, so server-side security can be implemented this way. On the client, we send the role memberships along with the bearer token, so they are accessible in the client-side code. This is the package that comes down in response to api/token.

In particular, the module *angular-permission* gives a simple way to tie an authorization condition to a state in ui-router, so you cannot navigate to that state if the condition is not met. So the intention is to set up conditions matched to role membership, and assign these to states on the client, just as we do to the corresponding functions on the server.

So I think we can use these structures to give us quite fine-grained control over authorization for various functions. What we need are:

  • abstract definitions of rights represented as Roles (e.g. can save a school, can read teacher info) and
  • a mechanism to map domain groups into such Roles

Other Notes

  • The ASP.NET Identity code was modified to support logins using either an ASP.NET Identity user (i.e. defined in the identities database) or a domain login account – so the simplest way to get in is to supply the user name and password you use to log in to Windows on your machine.

Authorization

The Pacific EMIS uses a granular permissions architecture that divides the application into topics and assigns the user rights within each topic, for example:

  • school;
  • teacher;
  • establishment;
  • enrolment;
  • survey;
  • finance;
  • infrastructure; etc.

Rights are:

  • Read : may read topic data
  • ReadX: may read extended topic data
  • Write: may write basic topic data
  • WriteX: may write extended topic data
  • Admin: may administer data in this topic; for example manage lookup lists
  • Ops: may perform any specific data manipulation operations in this topic.

Some rights imply others (e.g. ReadX ⇒ Read, WriteX ⇒ ReadX, WriteX ⇒ Write)
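As a worked sketch only (using the bit values Read = 1, ReadX = 2, Write = 4, WriteX = 8 described under “More on Permissions” below, not the actual implementation), the implications can be expanded with bitwise arithmetic:

DECLARE @rights int = 8 -- WriteX only
IF @rights & 8 > 0 SET @rights = @rights | 4 | 2 | 1 -- WriteX implies Write, ReadX and Read
IF @rights & 4 > 0 SET @rights = @rights | 1 -- Write implies Read
IF @rights & 2 > 0 SET @rights = @rights | 1 -- ReadX implies Read
SELECT @rights effectiveRights -- 15 = Read + ReadX + Write + WriteX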

In the Pineapples database each Topic/Right combination is represented as a database role (e.g. pEstablishmentRead); these are called Functional Roles. The user (logged in to the database using Windows Authentication) belongs to a role representing their user group (e.g. FinanceOffice); these are called Organisational Roles. Each Organisational Role is a member of a set of Functional Roles, and these memberships bestow database permissions on the logged-in user.
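For example, making an Organisational Role a member of its Functional Roles might look like this (pEstablishmentRead and FinanceOffice come from the text above; pFinanceWrite is an illustrative name only):

ALTER ROLE pEstablishmentRead ADD MEMBER FinanceOffice
ALTER ROLE pFinanceWrite ADD MEMBER FinanceOffice -- illustrative Functional Role name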

However, the database connection from the web site is made in the name of the service account running the web site, so this model cannot work there. Using impersonation limits scalability, because database connections cannot be reused. So we need to enforce user privileges in other ways.

At the server, the user must be prevented from calling any API endpoint for which they do not have permission (e.g. they cannot edit a school record if not assigned to pSchoolWrite). On the client, the user interface should reflect the operations that the user is not permitted to perform (e.g. the edit button on the School Item form is only available if the user is in pSchoolWrite).

Note that client-side permissions are treated as a “courtesy”, to allow the user to understand their role and not attempt operations that will fail due to authorization restrictions at the server; all authorization is enforced on the server.

More on Permissions

FIXME - cleanup with all content above.

Pacific EMIS Online determines the permissions you have from your AD group memberships. It finds all your AD group memberships in AspNetUsers, and finds any Permission claim associated with them in AspNetUserClaims.
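A sketch of what that lookup could look like against the standard ASP.NET Identity tables – the claim type name 'Permission' is an assumption, not verified against the source:

SELECT U.UserName, C.ClaimType, C.ClaimValue
FROM AspNetUsers U
INNER JOIN AspNetUserClaims C ON C.UserId = U.Id
WHERE C.ClaimType = 'Permission' -- assumed claim type name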

This was similar in the ancestor EMIS originally used in Solomon Islands, from which Pacific EMIS Online has its roots.

The Permission claim value encapsulates the permissions shown in the Profile (click on your user name). Each character in the permission hash represents one topic (see the topic list above); the value of that character encodes the particular permission(s) you have.

Comparing the hash characters with the permissions displayed in the Profile, you'll observe that:

  • o = all permissions
  • 0 (zero) = no permission
  • 3 = read + readx
  • ? = read + readx + write + writex
  • etc.

In fact, this is not as strange as it may appear: treat the permissions as a bit map: Read = 1, ReadX = 2, Write = 4, WriteX = 8, Admin = 16, Ops = 32. Then add them up (e.g. Read + ReadX + Write + WriteX = 15). Add 48 to this (= 63). Convert to an ASCII character = “?”.

Going the other way is easy too: ASCII(“?”) = 63; minus 48 = 15; and 15 = 1 + 2 + 4 + 8 = Read + ReadX + Write + WriteX.
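You can verify the mapping directly in T-SQL:

SELECT CHAR(48 + 1 + 2 + 4 + 8) permissionChar -- '?' = Read + ReadX + Write + WriteX
SELECT ASCII('?') - 48 rightsValue -- 15 = 1 + 2 + 4 + 8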

However, these permissions are AGGREGATED; that is, Pacific EMIS combines the permissions from each AD group you belong to.

In the ancestor EMIS originally used in Solomon Islands, from which Pacific EMIS Online has its roots, this aggregation is done by the stored proc getEffectivePermissions.
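The aggregation itself amounts to a bitwise OR on each character position. A minimal sketch for a single topic character (the real logic lives in getEffectivePermissions):

DECLARE @g1 char(1) = '3' -- Read + ReadX from one group
DECLARE @g2 char(1) = '<' -- Write + WriteX from another group
SELECT CHAR(48 + ((ASCII(@g1) - 48) | (ASCII(@g2) - 48))) combined -- '?' = Read + ReadX + Write + WriteX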

Implementing Authorization as Claims

These granular permissions will be determined at login time, and encoded as claims in the bearer token. When an API endpoint is called, authorization is determined by interrogating the claims in the current ClaimsIdentity. Further, the permissions information is sent to the client at login time, along with the token. This enables the client-side code to access this information when configuring the UI.

To keep the data small, the permissions are hashed into a short string. Each character of the string represents a topic. Since there are 6 possible rights for each topic, there are 64 possible values for the topic (0–63); note that the dependencies between rights mean that, in practice, many of these combinations cannot occur. Adding 48 to this value produces a displayable character in the range 48–111, and this is the character included in the string to represent that topic. In this way, the permission grid is rendered as an 11-character string.

Server-side Implementation

An attribute PineapplesPermission(topic, right) may be assigned to any API endpoint (i.e. API controller method) to enforce that permission on the endpoint.

Client-side Implementation

The client-side implementation is based on angular-permission, creating a permission named p<topic><right>, for example:

<span permission permission-only="pSchoolWrite">Click to edit...</span>

Assigning Permissions

For AspIdentity authentications, permissions are assigned directly into the AspNetUserClaims table; the hashed string is stored as the claim value.

For AD authentications, we map the user to a “proxy” in AspNetUsers in the Identity database. This proxy is determined by the group memberships of the user, and the permission claim is associated with this proxy. For example, a user who is a member of the AD group ScholarAdmins may be assigned a proxy AspNetUser account of the same name. That account will have a permission claim corresponding to the rights assigned to this group.
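Storing such a claim could look like the following sketch – the claim type name and the sample hash value are illustrative only:

DECLARE @proxyUserId nvarchar(128) = '...' -- Id of the proxy AspNetUsers row
INSERT INTO AspNetUserClaims (UserId, ClaimType, ClaimValue)
VALUES (@proxyUserId, 'Permission', 'o30?0000000') -- 11-character permission hash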

When you log in using an AD account, the system finds all the Active Directory groups you belong to. It looks in AspNetUsers to see if there is an entry for each AD group; if there is, it gets the MenuKey for that group – this is an entry in the Navigation table. You may of course belong to multiple groups, and these may have different menus; it picks the one with the highest value (i.e. the alphabetically first value) in the MenuPriority field on Navigation. For example, if you are a member of grpMEHRD_PCRU, it finds the two menus PCRU and ALL – PCRU has the higher priority, so that is what it uses, and you get the ‘Admin’ item at the bottom. The menu is set up when you log in.
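A sketch of that selection logic in SQL – the key column name is an assumption, and the actual implementation may differ:

SELECT TOP 1 N.*
FROM Navigation N
WHERE N.ID IN ('PCRU', 'ALL') -- assumed key column; MenuKeys gathered from the user's AD groups
ORDER BY N.MenuPriority -- alphabetically first value wins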

There are three types of entry in the Navigation table: menu IDs, headings, and actions. Menu IDs are items that define entire menus; these values are assigned to users and groups in the MenuKey field of the AspNetUsers table. (A consolidated sketch of all three entry types follows the field lists below.)

All the structure described here is custom to Pineapples Online (aka SIEMIS Online, MIEMIS Online, etc.), built on top of the standard ASP.NET Identity from Microsoft.

See the ALL and PCRU records in the Navigation table for examples of menu ID entries.

For menu ID entries you define:

  • Children – a set of other entries in this table, separated by |
  • Homestate – the page to go to when you first log in (this is a “state” in the AngularJS client-side code)
  • MenuPriority – priority for the menu, used to choose between the multiple menus that a user may have.

Headers represent headings and subheadings in the menu tree. They define:

  • Icon – fontawesome classes for the icon to display beside this menu item
  • Label – label to display in the menu
  • Children – list of submenus or actions under this heading

Finally, there are actions; these define a function to invoke in the application:

  • Icon – fontawesome classes for the icon to display beside this menu item
  • Label – label to display in the menu
  • State – the AngularJS state to go to when this item is selected.
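Putting the three entry types together, hypothetical Navigation rows for a small menu could look like this – the column names are inferred from the field lists above and may not match the actual schema:

-- a menu ID entry
INSERT INTO Navigation (ID, Children, Homestate, MenuPriority) VALUES ('PCRU', 'hdgSchools', 'site.home', 'A')
-- a heading entry
INSERT INTO Navigation (ID, Icon, Label, Children) VALUES ('hdgSchools', 'fa fa-university', 'Schools', 'actSchoolList')
-- an action entry
INSERT INTO Navigation (ID, Icon, Label, State) VALUES ('actSchoolList', 'fa fa-list', 'School List', 'site.schools.list')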

This is the menu PCRU in action:

If you follow the chain down from the PCRU menu, through its children and their children, you’ll see how this menu is put together.

This can go as many levels as you want!

Technically, this tree is traversed and turned back into a hierarchy by the procedure getNavigationXml, which produces the menu as XML.

SSL/TLS Encryption

The application supports encrypted communication (i.e. HTTPS). In development the embedded web server already serves pages through HTTPS. In production it must be configured to be served over HTTPS, with a redirect from HTTP to HTTPS to ensure this. Details on how to achieve this are documented in the Systems Administrator Guide.

Latest Top 10 Security Risks

The project should always be measured against OWASP's most up-to-date Top 10 Security Risks. It is important to re-assess the project against this top 10 list on a regular basis (e.g. every year, or whenever a new release comes out). Any change should be carefully examined to make sure the application still covers all of them, with details reflected in this documentation.

Integrated Penetration Testing

The above guidelines and procedures should offer an excellent starting point for ensuring a secure web application. Of course, securing a web application should not stop there. We would like to see a more integrated penetration testing process. There are a number of tools that can be used to support this process; most of them have a relatively steep learning curve, but they are worth the time investment.

After some time evaluating several free software tools that were either recommended by OWASP or considered promising projects we have come up with a short list of tools to watch:

One or more of those tools could eventually be integrated into the development process – at first only making use of simple features such as automated scans, then slowly integrating more robust testing processes one by one. As these new processes come to life they should be clearly documented here, with instructions on how to use the tools.

Project Documentation

Higher level documentation (i.e. the User Manual, the Systems Administrator Manual and this Developer Manual) is prepared using the collaborative tool DokuWiki. To contribute, you simply need to create an account and have a project manager assign you the rights to edit.

Functions and Data

A collection of miscellaneous things that are useful to know when developing on the Pacific EMIS – notes that sit somewhere between this guide and the low-level documentation in the source code.

Data Warehouse

The database has numerous tables, views, procedures, etc., and it can be daunting to get good data out of a highly normalised schema. For this we have the data warehouse layer: the data is exposed to developers, data managers and end users in a more accessible format through this layer. All tables and views in the warehouse schema (i.e. names starting with warehouse.*) form this data warehouse layer.

They should always be used when possible, instead of building large queries from scratch to access the data. This ensures a consistent view of the data across the various presentation layers (e.g. PowerPoint presentations, Jasper reports, online dashboards, the website, etc.) and, once the data collection processes are improved and stable, a consistent view of the data over time.
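For example, prefer reading from a warehouse view over hand-rolling joins. The view name warehouse.Enrol below is illustrative only; the second query lists the actual warehouse objects in a given deployment:

-- illustrative: read from a warehouse view instead of the normalised tables
SELECT TOP 10 * FROM warehouse.Enrol

-- discover the actual warehouse tables and views
SELECT name, type_desc
FROM sys.objects
WHERE schema_id = SCHEMA_ID('warehouse')
ORDER BY type_desc, name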

Survey Year Filter

Implemented through a new Angular filter, surveyYear. This relies on a sysParam, SURVEY_YEAR_FORMAT, which is a template for the format. The template may contain these tokens:

  • yyyy : 4 digit survey year
  • yy : 2 digit survey year
  • nnnn : 4 digit next year
  • nn : 2 digit next year

e.g. the template SYyyyy-nn for the value 2015 produces SY2015-16

This is incorporated into the default school SurveyList.cshtml to get started, but this filter should be used everywhere a school year is displayed, to show the full school year.

For reporting it may be useful to do this conversion on the server, in the recordset presented to the report. For this you can use the SQL UDF common.surveyYearFormat like this:

SELECT common.surveyYearFormat(svyYear) surveyYear
FROM SchoolSurvey