Computer analyses in sports betting

Using computers to generate profits is perfectly normal on the stock market. Wouldn't it be great to use computers for sports betting as well? What would it be like if the PC carried out the analyses? Imagine if everything ran automatically and the profits were made while you were asleep, or if software at least indicated the perfect time to bet money on this or that sporting event. Apart from being practical, the time factor naturally also plays a role: while a person checks what's going on every now and then and reacts, the computer is always ready and reacts in fractions of a second.

Computer Analysis: Illegal? Morally reprehensible?


Computer analyses are nothing new in sports betting and online football betting tips: odds, for instance, are calculated automatically. Nobody sits there and recalculates the odds by hand after each bet. What would happen if they miscalculated? Unthinkable. Artificial intelligence has long since arrived in the field of sports betting. Anyone who decides to use computer programs for help is not acting illegally, because it is not prohibited. The moral side is for everyone to judge for themselves. Regardless of where such software comes from and what it costs, it is likely to be used.

Machine remains machine

It would certainly be worth its weight in gold to have good software for computer analyses. What counts as good is a matter of interpretation, but a computer is only as good as its programming. It cannot take current facts into account, the illness of a football player, for example. An automated system would only help if the betting strategy were based purely on odds. On the other hand, nothing here is so complicated or time-consuming that spending an enormous amount of money on software would be worthwhile. This is shown by the following strategies, which pay particular attention to the odds:

Strategies based primarily on odds

For people who prefer to play it safe, the following strategy is suitable: if you place one bet per day at odds of 1.10 and win every time, reinvesting the full amount, your budget grows by a factor of about 2.59 every ten days. Starting from 10 euros, the budget develops as follows:

10 days = 25,90 Euro
20 days = 67,30 Euro
30 days = 174,50 Euro
40 days = 452,60 Euro
50 days = 1173,90 Euro
60 days = 3044,80 Euro
70 days = 7897,50 Euro
80 days = 20484 Euro
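
These figures are simply compound growth at a daily factor of 1.10. A few lines of Perl (a quick sketch, not betting software) reproduce them up to rounding:

use strict;
use warnings;

# reinvest the full budget once a day at odds of 1.10, assuming every bet wins
my $budget = 10;    # starting capital in euros

for my $day (1 .. 80) {
    $budget *= 1.1;
    printf "%d days = %.2f Euro\n", $day, $budget if $day % 10 == 0;
}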

In the event of a loss, you need around ten winning bets to recoup the money. To reduce the risk, you combine the system with a surebets strategy and secure half of the money every 20 days. If you then lose, you only fall back about ten days and the money can continue to work, for example (this time with the modest starting capital of 1.00 euro):

10 days = 2,59 Euro
20 days = 6,73 Euro
=> 3,36 Euro secured and 3,37 Euro will be played on
30 days = 8,72 Euro
40 days = 22,63 Euro
=> 11,31 Euro secured and 11,32 Euro will be played on
50 days = 29,35 Euro
60 days = 76,12 Euro
=> 38,06 Euro secured and 38,06 Euro will be played on
70 days = 98,72 Euro
80 days = 256,05 Euro
=> 128,02 Euro secured and 128,03 Euro will be played on
90 days = 332,06 Euro
100 days = 861,29 Euro

The big advantage: once you reach 4,000 euros, 2,000 euros can be paid out as profit and the remaining 2,000 euros more than double again within ten days. In addition, this strategy can be used in almost all sports.

Betting against late goals

This strategy is suitable if you are looking for bets with a high hit rate, good odds and the shortest possible betting period. The system assumes that in certain matches no goal is scored in the last five minutes of the game. First you should look at the character of the game: knockout games and games with a clear favourite are ignored.

Then you look at the course of the game until the 85th minute. The following possibilities or assumptions exist:

1. Favourite leads by one goal – no further goal expected
2. Favourite leads by two goals or more – no further goal expected
3. Outsider leads – a further goal is likely
4. Draw – assess whether both teams could be satisfied with a draw (not the case for title aspirants or relegation candidates) – no further goal expected
5. At least one sending-off – a further goal is likely

Then take a look at the season statistics of the participating teams: how often did they score in the last five minutes, and how often did a team's goalkeeper concede in the closing minutes? If all the conditions are met, the bet should be placed in the 85th minute.

Timing matters

Good timing is essential for this strategy. It also helps to know the relevant football statistics in advance so you don't have to look them up first.

 

SQL::Composer – mapping SQL from Perl and back

SQL::Composer is yet another SQL mapper, but unlike others it does something very useful: it allows you not only to build SQL from a Perl structure, but also to map the data coming back from the database into a usable Perl structure. We used it for this horse racing betting guide.

Sometimes you don't need an ORM, but you would like an SQL builder that hides escaping, lets you specify joins and returns a usable Perl structure when fetching data from the database. SQL::Composer does exactly that.

For several years I had been waiting for a module that would do this, but unfortunately all the modules I tried lack at least the join support, and I use joins a lot. They are also tightly coupled with other ORMs or require a lot of manual parsing. So I wrote my own module, and several people have found it handy. Here it goes.
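
As a minimal sketch of the basics (the table and column names here are made up for illustration), a plain select with a WHERE clause looks like this:

my $select = SQL::Composer::Select->new(
    from    => 'book',
    columns => ['id', 'title'],
    where   => [author_id => 5]
);

my $sql  = $select->to_sql;
my @bind = $select->to_bind;    # (5)

# SELECT `book`.`id`, `book`.`title`
# FROM `book`
# WHERE `book`.`author_id` = ?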

Let's start with the most advanced example, which shows the best part of SQL::Composer. Say we have Review -> Book -> Author tables and we want to fetch a Review with all the related information.

my $expr = SQL::Composer::Select->new(
    from    => 'review',
    columns => ['text'],
    join    => [
        {
            source  => 'book',
            columns => ['title'],
            on      => [id => {-col => 'review.book_id'}],
            join    => [
                {
                    source  => 'author',
                    columns => ['name'],
                    on      => [id => {-col => 'book.author_id'}]
                }
            ]
        }
    ],
    where => [id => 1]
);

Let’s generate SQL:

my $sql = $expr->to_sql;

# SELECT
#     `review`.`text`,
#     `book`.`title`,
#     `author`.`name`
# FROM `review`
# JOIN `book` ON `book`.`id` = `review`.`book_id`
# JOIN `author` ON `author`.`id` = `book`.`author_id`
# WHERE `review`.`id` = ?

And get the bind values:

my @bind = $expr->to_bind;
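# with where => [id => 1] above, @bind holds a single value: (1)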

After fetching the rows from DBI as an ARRAYREF, we can map them back into a correctly structured HASHREF that can either be used as is or converted into an object very easily (simple and tiny Perl value objects without additional functionality).

my $sth = $dbh->prepare($sql);
my $rv  = $sth->execute(@bind);

my $rows = $sth->fetchall_arrayref;
my $row_object = $expr->from_rows($rows)->[0];

my $objects = $expr->from_rows($rows);

If $rows is something like:

[['Good', 'Perl programming', 'YAPH']]

then we will get:

# [
#     {
#         'book' => {
#             'title'  => 'Perl programming',
#             'author' => {
#                 'name' => 'YAPH'
#             }
#         },
#         'text' => 'Good'
#     }
# ];

This structure is intuitive and maps perfectly to hash-based Perl objects. If you do not want joins to be embedded into other joins, simply don't nest them:

my $expr = SQL::Composer::Select->new(
    from    => 'review',
    columns => ['text'],
    join    => [
        {
            source  => 'book',
            columns => ['title'],
            on      => [id => {-col => 'review.book_id'}],
        },
        {
            source  => 'author',
            columns => ['name'],
            on      => [id => {-col => 'book.author_id'}]
        }
    ],
    where => [id => 1]
);

And we will get:

# [
#     {
#         'book' => {
#             'title' => 'Perl programming',
#         },
#         'author' => {
#             'name' => 'YAPH'
#         },
#         'text' => 'Good'
#     }
# ];

Besides joins and data mapping, SQL::Composer supports all the expressions you would expect to find in other SQL builders.

[foo => 'bar']              => "`foo` = ?",   ['bar']
[foo => { '!=' => 'bar' }]  => "`foo` != ?",  ['bar']
[foo => {-col => 'bar'}]    => "`foo` = bar", []

It also supports SQL expressions that are sometimes hard to build, like:

[created => {'>' => \['ADD_DATE(NOW(), INTERVAL ? SECOND)', 10]}]

# "created > ADD_DATE(NOW(), INTERVAL ? SECOND)", [10]

I have been using SQL::Composer in ObjectDB for quite a while, and it works perfectly well in production with real-world data and problems (of course, provided that, like me, you try to keep logic out of SQL queries). The main focus is on joins and related objects, since when you have "good enough" normalization you have a lot of joins. But as I stated earlier, some people may use SQL::Composer directly, since it is very handy and removes a lot of boilerplate from your Perl code.

One could say that writing raw SQL is more readable. Yes, that is true for very complex queries. But most of the time you need some kind of automation, argument injection and so on, and you end up implementing some kind of SQL builder yourself. It is also easier to map data coming from the database when the query is represented as a structure rather than by parsing SQL, which is neither easy nor very portable.

App::chronos – automatically record your computer activities

Often you want to know how much time you spend on various computer activities during your work or home day. There are lots of apps that let you record the time, but unfortunately you have to turn them on and off manually. It can be really frustrating when you forget to do so. So I have written an app that does it automatically.

chronos listens for X11 window switches and records how much time you have spent in every application. It runs a set of filters that guess the type of the application and its name. Moreover, if the application can provide more than an answer to "what type am I", such as a visited URL or a contact you're chatting with, then the filter can parse that information and add it to the log.

Activity details

As previously said, the filters can parse additional information. For example, right now if the application is a Firefox or Chromium browser, the currently visited URL is detected; this is done by parsing the current sessions. In the case of Skype or Pidgin, for example, the current contact name is detected.

Output

chronos prints the events to stdout, so the log can easily be saved to any file you like. The format is simple: one JSON object per line, with UNIX epoch timestamps. For example:

{
   "_end" : 1412750698,
   "_start" : 1412750693,
   "application" : "Chromium",
   "category" : "browser",
   "class" : "\"Chromium\", \"Chromium\"",
   "command" : "",
   "id" : "0x4a00048",
   "name" : "\"reddit: the front page of the internet - Chromium\"",
   "role" : "\"browser\"",
   "url" : "www.reddit.com"
}

The JSON object has several fields. role, class, name and command are the fields recorded from X11 and they are saved as is. A filter can use them to detect, for example, which command-line program is running and which file it is working on.

Reporting

Reading and analyzing the log file by hand isn't very handy; that is where the report command steps in.

As you already know, an event is a JSON object with various fields. The report tool can search through those events, group them and sort the results by the time spent.

Show top 10 visited URLs:

$ chronos report --fields 'url' --where '$category eq "browser"' --group_by 'url' log_file | head -n 10

00d 00:18:27 url=www.youtube.com
00d 00:05:29 url=github.com
00d 00:01:59 url=twitter.com
00d 00:01:25 url=code.google.com

Here I show only the url field, filter for events whose category is browser and group by url.

Using --where and --group_by various useful reports can be produced specific to your needs.

--where syntax

As you may have noticed, the --where option has a Perl-like syntax. It is actually eval-ed into a Perl subroutine that is then run on every event. This way the where clause can be as elaborate as needed.
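
Under the hood this amounts to something like the following sketch (a hypothetical illustration, not the actual App::chronos code): the string becomes the body of a subroutine, the event fields become variables, and the subroutine is called for every event.

my $where = '$category eq "browser"';

# compile the --where string into a filter subroutine (simplified sketch)
my $filter = eval 'sub {'
  . ' my ($event) = @_;'
  . ' my $category = $event->{category} // "";'
  . ' my $url      = $event->{url} // "";'
  . " return ($where);"
  . ' }';
die "Invalid --where clause: $@" unless $filter;

print "matched\n" if $filter->({category => 'browser', url => 'www.reddit.com'});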

--from and --to

Timeout

To configure how long chronos sleeps between recording activities, use the --timeout option.

Idle time

chronos also detects idle time and stops recording the activity. Idle time is detected by running xprintidle and comparing the result to the --idle_timeout option, which is 5 minutes by default. So if you don't type anything or move your mouse for 5 minutes, the previous activity is considered ended.
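
Roughly speaking, the check could look like the following sketch (a hypothetical illustration, not the actual App::chronos code); xprintidle reports the X11 idle time in milliseconds:

my $idle_timeout = 5 * 60;      # seconds, as with the default --idle_timeout

my $idle_ms = `xprintidle`;
chomp $idle_ms;

if ($idle_ms / 1000 >= $idle_timeout) {
    # no keyboard or mouse activity for too long: consider the activity ended
}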

Flushing

Various bad things can happen during recording, such as a power outage or the chronos process being killed accidentally. To be more robust, chronos periodically flushes the current activity to the log file. This can be configured with the --flush_timeout option. That way you won't lose an event you have been recording for several hours.

Contributing

Different people use different applications. I cannot write filters for every application out there, so if you use chronos and want an application and its details to be parsed, just write a filter package. It's as simple as:

package App::Chronos::Application::Skype;

use strict;
use warnings;

use base 'App::Chronos::Application::Base';

sub run {
    my $self = shift;
    my ($info) = @_;

    # It's not a Skype application
    return
      unless $info->{role} =~ m/ConversationsWindow/
      && $info->{class} =~ m/Skype/
      && $info->{name} =~ m/Skype/;

    # Yay, it's Skype, let's parse the contact name
    $info->{application} = 'Skype';
    $info->{category} = 'im';
    ($info->{contact}) = $info->{name} =~ m/^"(?:\[\d+\])?(.*?) - Skype/;

    return 1;
}

1;

Tips & tricks

I personally have a bash script that combines several reports:

#!/bin/sh

LOG_FILE=$1
LIMIT=10
COMMAND="perl -Ilib script/chronos"

echo 'Top categories:'
$COMMAND report --fields 'category' --group_by 'category' $LOG_FILE
echo
echo "Top $LIMIT talks:"
$COMMAND report --fields 'contact' --where '$category eq "im"' --group_by 'contact' $LOG_FILE | head -n $LIMIT
echo
echo "Top $LIMIT URLs:"
$COMMAND report --fields 'url' --where '$category eq "browser"' --group_by 'url' $LOG_FILE | head -n $LIMIT
echo
echo 'Idle time:'
$COMMAND report --where '$idle' $LOG_FILE

And then:

$ ./report.sh log_file | mail -s Activities vti

Mixins in Perl

If you want to use mixins in Perl you don't have to install anything or play with the symbol table yourself. It's right there, in the core.

Mixins are basically classes that are not meant to be instantiated; their role is to embed methods into your class. They are seen as an alternative to multiple inheritance and are something like a stripped-down version of Roles.

To embed methods you can use plain old simple Exporter!

package MyMixin;
use parent 'Exporter';
use feature 'say';

our @EXPORT_OK = qw(log);

sub log {
    my $self = shift;

    # Yes, this works too!
    $self->some_internal_method;

    say @_;
}

package SomeClassElsewhere;

use MyMixin 'log';

sub do_stuff {
    my $self = shift;

    $self->log('it is here!');
}

sub some_internal_method {
    my $self = shift;
}

KISSing is nice!

Publishr — publish everywhere

For http://pragmaticperl.com I needed a tool to post the new issue announcement to several social networks. It ended up supporting Facebook, Twitter, LiveJournal, VK, Email, IRC, Jabber/XMPP, Skype and more.

Where to get it?

http://github.com/vti/publishr.

How to run

From the git repository:

perl -Ilib script/publishr --config publishr.json message.txt

Where message.txt looks like:

Status: This is the short title
Link: http://link-to-the-press-release
Image: /path/to/image.jpg
Tags: perl, pragmaticperl, journal

The
multiline
body

Of course every social network supports a different kind of message. This is handled by the so-called channels. For example, for Twitter publishr only uses Status, Link and Image.

The publishr.json configuration file looks like:

{
   "access" : [
      {
         "name" : "twitter access #1",
         "options" : {
            "access_token" : "",
            "access_token_secret" : "",
            "consumer_key" : "",
            "consumer_secret" : ""
         },
         "type" : "twitter"
      }
   ],
   "scenarios" : [
      {
         "access" : "twitter access #1",
         "name" : "post to pragmaticperl twitter",
         "options" : {}
      }
   ]
}

access is a list of channel credentials. You name them as you like, provide the required options and use them in scenarios. This is done so that you can reuse the same access tokens in different scenarios, like posting to different Facebook groups, etc.

In scenarios you can provide additional options, like an IRC channel, etc.

Custom commands

Sometimes you just need to run a custom CLI program. For example, this is how sending to Skype is done. In the util directory you can find a skype-chat.py Python script which uses the Skype4Py library. In order to call that script you configure a cmd scenario:

{
    "name":"skype",
    "access":"cmd",
    "options":{
        "env":{
            "PYTHONPATH":"/path/to/skype4py/"
        },
        "cmd":"./util/skype-chat.py 'Skype Chat' '%status% %link%'"
    }
}

Here %status% and %link% are replaced by values from message.txt.
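
For illustration, the placeholder substitution could look something like this (a hypothetical sketch, not necessarily how publishr does it internally; the keys correspond to the message.txt fields):

my %message = (
    status => 'This is the short title',
    link   => 'http://link-to-the-press-release',
);

my $cmd = "./util/skype-chat.py 'Skype Chat' '%status% %link%'";

# replace every %field% with the corresponding message value
$cmd =~ s{%(\w+)%}{$message{$1} // ''}ge;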

Running only specific scenarios or channels

Sometimes you want to run just a specific scenario or channel; there are options for this:

perl -Ilib script/publishr --config publishr.json \
    --scenario 'post to twitter' message.txt
perl -Ilib script/publishr --config publishr.json \
    --channel 'facebook' message.txt

Useful web application design patterns

I have collected the most useful web application design patterns that I use in the web applications I write at work and in my personal projects. Each is presented as a problem and a solution.

All of the examples are for PSGI/Plack based web applications. But I am sure they can be used in web frameworks as well.

Filter/Middleware oriented design

Much of a web application's functionality can be separated into so-called filter chains, or middleware in Plack terms. A middleware does not have to be just some light request/response altering; it can be a complete subsystem. Since middleware are decoupled by design, it is easy to build simple and effective solutions.

Let's take a look at the most common web application task: request dispatching. We want to take an HTTP request, map it onto a Perl class, call it and then render a template.

The naive solution might be to do that in a single place like most of the web frameworks do.

sub {
    my $env = shift;

    my $path_info = $env->{PATH_INFO};

    my $controller_class = $path_info;    # naive 1:1 mapping from the path to a class name

    return [404, [], ['Not found']] unless try_load_class $controller_class;

    my $res = $controller_class->new->run($env);

    return [200, [], [$res]];
}

It's all good and nice until you want to use something like routes here. Then you want to add authorization, user roles, default templates and so on.

You will have to move everything into the controllers, thus making them fat and very hard to test, since you will have to mock the whole environment.

The better and simpler approach is to separate the responsibilities of the request phases into middleware. For example:

enable 'RequestDispatcher';
enable 'ActionDispatcher';
enable 'ViewDisplayer';

RequestDispatcher takes the HTTP request and transforms it into an object the web application understands; for example, you can use routes there. Then, if a matching route is found, ActionDispatcher creates a controller and runs it. Finally, ViewDisplayer renders the appropriate template.
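
A minimal sketch of such a middleware could look like this (a hypothetical illustration; the myapp.route key and the controller interface are assumptions, not the API of an existing framework):

package MyApp::Middleware::ActionDispatcher;

use strict;
use warnings;

use parent 'Plack::Middleware';

use Class::Load 'try_load_class';

sub call {
    my ($self, $env) = @_;

    # assume RequestDispatcher already stored the matched route in $env
    my $route = $env->{'myapp.route'};
    return $self->app->($env) unless $route;

    my $class = $route->{controller_class};
    return [404, [], ['Not found']] unless try_load_class($class);

    # the controller puts whatever should be rendered back into $env
    $class->new(env => $env)->run;

    return $self->app->($env);
}

1;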

As a side effect you get the ability to render a template without calling a controller, which many web applications need.

Later on you decide you need authorization. Just write more middleware:

enable 'RequestDispatcher';
enable 'User';
enable 'ACL';
enable 'ActionDispatcher';
enable 'ViewDisplayer';

The User middleware loads a user, from a cookie for example, and ACL checks whether the current user has the role required to execute this controller.

Controllers are kept thin (there is no need to check user rights everywhere), every middleware follows the Single Responsibility Principle, and everything is configurable and extensible without affecting other parts of the system.

I die a little inside when I see this:

package MyController;
use base 'WebFramework::Controller';

sub action {
    my $self = shift;

    my $user = User->load($self->req->cookie('user_id'));
    if ($user->role eq 'admin') {
        ...
    }
    else {
        return $self->403_error;
    }
}

When using middleware you move all of this stuff into one place and clean up your controllers and tests for them.

Single method controllers

I have already written about this. But I have to mention it here too.

Many web frameworks create controllers that have several methods. For example, we have a controller called Post with internal CRUD methods like create, update and so on. While this may seem like a good idea, a better one is to keep it simple and create a separate package for every action. This also helps to follow the Single Responsibility Principle and makes testing a lot easier.

This way you can also use composition more selectively. It is easier to apply a Role/Mixin or inherit from a base controller if you have just one method to worry about.
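
For example, instead of a single Post controller, creating a post gets its own small package (a sketch; the base class and the req helper are hypothetical, in the spirit of the controller shown in the next section):

package MyApp::Controller::Posts::Create;

use strict;
use warnings;

use parent 'MyAppController';

sub run {
    my $self = shift;

    # only the logic for creating a post lives here; listing, updating
    # and deleting each get their own package
    my $title = $self->req->param('title');

    ...;
}

1;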

Request scopes

In web applications you usually have objects with two different life cycles. The first kind has to be available all the time and you want to build it once. The second kind exists only for the duration of a request.

When using Plack you can distinguish between application-wide objects, or services as I call them, and env objects that are created during a request.

For example, we have a mail object that can send mail. We want it to be available in controllers and don't want to create and configure it every time. Thus the controller has just two attributes: services and env.

package MyAppController;

use Plack::Request;

sub new {
    my $class = shift;
    my (%params) = @_;

    my $self = {};
    bless $self, $class;

    $self->{env}      = $params{env};
    $self->{services} = $params{services};

    return $self;
}

sub service { shift->{services}->service(@_) }
sub req     { Plack::Request->new(shift->{env}) }

Controller doesn’t have to be any more complex than that.

env is also used for passing objects between middleware, so you can put template variables and the like there.

Service containers

In the previous chapter I mentioned services. How does one configure them for the whole application? You can write a container, a simple hash or anything else holding the services, and pass it to the other components of the system. Sometimes a simple IoC container is a good choice. This way you have something like:

package MyApp;
use base 'WebFramework';

sub startup {
    my $self = shift;

    my $services = $self->services;

    $services->register('mail' => MyMailObject->new);

    ...
}

sub to_psgi_app {
    ...
}

Pull templates

Pull templates, or active templates, are templates that can call model methods directly. This may sound like a crazy idea, but if you do it right it has enormous benefits.

Say, for example, you need to know in every template whether the user is logged in. Do you really set a user variable in every controller just to keep the template happy? Or would you rather call a special method from the template to find out whether the user is logged in?

These special methods are usually called helpers. And if you keep implementing helpers in such a way that they only have read-only access to whatever they touch, you are going to be all right.

For example an active template can look like:

% if ($helpers->acl->is_user) {
<span>Logged in</span>
% } else {
<span>Anonymous</span>
% }

And a helper can look like:

package Helpers::ACL;
use base 'WebFramework::Helper::Base';

sub is_user {
    my $self = shift;

    return $self->env->{'user'} ? 1 : 0;
}

Also when you don’t have a controller and just want to render a template, helpers are already there and registered in a separate middleware.

Implementation

If you liked these patterns, you might want to take a look at http://github.com/vti/tu and a real life web application example http://github.com/vti/threads.

Contracts in mop

While trying out mop I decided to implement contracts, or basically a simple type checking system.

The type checking itself is done via Type::Tiny. In mop it is implemented using method roles, which is very handy. The code looks like:

use mop;
use mopx::contracts;

use Types::Standard -types;

class MyClass {
    has $!foo is expected(Int);

    method add ($add) is expected(Int), ensured(Int) {
        $!foo + $add
    }
}

So mopx::contracts exports expected and ensured. As you can see this works on attributes too. ensured is for checking the return values, thus adding even more robustness to our code.

In case you wonder what happens when the class is inherited: all the types are inherited too, so there is no need to repeat them. For example, an abstract class (or interface) can look like:

class MyAbstractClass {
    method do_something is ensured(Int)
}

class MyClass extends MyAbstractClass {
}

And MyClass inherits all the checking.

It is currently possible to override the type checking, but I will modify it so you can only override the types without breaking the Liskov substitution principle. For example:

class MyAbstractClass {
    method do_something is expected(Int), ensured(Int)
}

class MyClass extends MyAbstractClass {
    method do_something is expected(Number), ensured(PositiveInt)
}

So we can only weaken preconditions and strengthen postconditions. But this is not yet implemented :) (UPDATE: implemented!)

If you’re interested you can follow the project at http://github.com/vti/mopx-contracts.

I want to thank the guys who are making mop happen for their help on the channel, especially doy, who helped with the initial implementation.