MISS_HIT includes a simple code metric
      tool (mh_metric). It computes code
      metrics and complains if the metrics exceed acceptable levels.
    
    Configuration
        This tool works exactly the same as the style checker, and
        re-uses the same configuration files and mechanism.

Using MISS_HIT Metric
      
      
        To compute metrics on a set of files:
        
      $ mh_metric my_file.m my_model.slx
        MH Metric fully supports MATLAB code embedded inside Simulink
        models. (Unlike MH Style, there is no extra flag needed.)
      
      
        To compute metrics for all files in a directory tree:
        
      $ mh_metric src/
        To compute metrics for all files in the current directory tree:
        
      $ mh_metric
        Metrics are reported on standard output. You can also produce
        a JSON report, an HTML report, or write the metrics to a text
        file:
      $ mh_metric src --text=metrics.txt
      $ mh_metric src --html=metrics.html
      $ mh_metric src --json=metrics.json

        In a future release, more formats (e.g. csv) will be
        supported.
        Inside a CI environment, this produces too much
        output. Instead you can use the --ci option:
        
      $ mh_metric src --ci

        This mode will not produce an overall report; instead it
        only reports violations.
Enabling and disabling metrics
        To turn off limit checking for a specific metric (the default
        for all metrics), you can use the report directive in the
        config files:
        
      metric "npath": report
        To completely disable a metric (it will not be measured, nor
        will it appear in the final report), you can use the disable
        directive:
        
      metric "npath": disable
        To completely disable or enable all metrics (useful for large
        projects using complicated hierarchical configurations), you
        can use the wildcard directive with report and disable:
        
      metric *: disable
      metric *: report
Configuring and enforcing limits
        As indicated above, by default metrics are just
        reported. However, you can also enforce limits on any metric.
        For example, to limit the number of paths for each function,
        you add this to your configuration file:
        
      metric "npath": limit 5
        As with other configuration, these limits propagate to
        sub-directories. You can override these limits on a more
        local basis (just like with style rules).
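
        For example (the directory layout here is hypothetical), the
        project root miss_hit.cfg could set a strict project-wide
        limit:
        
      metric "npath": limit 5
        A src/legacy/miss_hit.cfg further down the tree could then
        relax the inherited limit for that sub-tree only:
        
      metric "npath": limit 20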
      
      
        In the description for each metric below, the name of the
        metric to use in config files is included in parentheses.
      
    Justifications
Pragmas
        Metric violations can be justified by placing a
        justification pragma at the level of scope where the
        violation occurs. Please refer
        to MISS_HIT Pragmas for a full
        description of all pragmas understood by MISS_HIT.
      
      
        For example:
        
      %| pragma Justify (metric, "npath", "can't be refactored");
        Longer justifications can be broken up into several lines:
      %| pragma Justify (metric, "npath", %| "this cannot be refactored " + %| "and I am going to tell you " + %| "why not...");
        Justifications that are useless (i.e. do not cover any
        violation) generate a warning.
      
      
Integration with issue-tracking systems
        A special configuration directive regex_tickets can
        be placed in the MISS_HIT configuration file. When set, this
        allows MISS_HIT to extract ticket identifiers from
        justifications, which in turn appear in a special section in
        the report.
      
      
	For example, this is how you can integrate with JIRA:
	
      regex_tickets: "\b[A-Z]{3,}-[0-9]+\b"
	Or GitHub issues:
	regex_tickets: "\B#[0-9]+\b"(Other regular expressions are left as an exercise to the reader.)
	You can mention tickets freely in justification text, for
	example:
      %| pragma Justify (metric, "cyc", %| "to be fixed in POTATO-666 or KITTEN-42");
	These then appear in the text or HTML report. This is helpful
	if you want to produce a report that mentions when things are
	going to be fixed.
      
      
        The command-line
        option --ignore-justifications-with-tickets can be
        used to ignore any justifications that mention a ticket.
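
        For example (a sketch; this combines flags in the usual way),
        to make a CI run treat violations as failures even when their
        justification references a ticket:
        
      $ mh_metric src --ci --ignore-justifications-with-tickets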
      
    Metrics
File metrics
        These metrics are computed for each file.
      
      Lines ("file_length")
        The number of lines. This should be equivalent to running
        the standard UNIX tool wc -l. This means
        comments and blank lines are counted.
      
      
Function metrics
        These metrics are computed for each function, nested function,
        and method.
      
      McCabe Cyclomatic Complexity ("cyc")
        This measures
        the McCabe
        cyclomatic complexity. We have aimed for mlint
        compatibility, instead of doing it "right". Specifically this
        means:
        
      - Empty branches "count", even though they should not contribute, based on the original definition of the metric.
- Exceptions are generally ignored. Specifically a try-catch block is treated like on flat block + one branch. This is not correct since every statement in a try block may create a jump.
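
        For example, this sketch has a cyclomatic complexity of 3:
        one for the function itself, plus one each for the if and
        the elseif (the else does not add a decision point):
      
function s = sign_of(x)
    % cyc = 3: function entry (1) + if (1) + elseif (1)
    if x > 0
        s = 1;
    elseif x < 0
        s = -1;
    else
        s = 0;
    end
end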
      Lines in function ("function_length")
        The number of lines for each function. Note that the line
        counts of all functions will likely not sum to file_length,
        due to blank lines and comments between functions.
      
      Path count ("npath")
        This approximates the number of paths through a function. This
        metric is based
        on NPATH
        and should be similar to what other popular metric tools
        compute.
      
      
        Note that this number can grow very large, very quickly,
        especially if you have a lot of sequential if blocks. Further
        note that this metric is neither an under-approximation nor
        an over-approximation, but a reasonable compromise.
      
      
        Since the MATLAB language supports raising and catching
        exceptions (including exceptions from further down the call
        tree), a safe over-approximation cannot reasonably be
        computed, since every single line may raise multiple
        exceptions.
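
        For example, in this sketch each of the three sequential if
        blocks doubles the path count, so npath is 2 * 2 * 2 = 8:
      
function r = flags(a, b, c)
    % npath = 8: paths multiply across sequential decisions
    r = 0;
    if a > 0
        r = r + 1;
    end
    if b > 0
        r = r + 2;
    end
    if c > 0
        r = r + 4;
    end
end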
      
      Maximum nesting of control structures ("cnest")
        This measures the maximum nesting level of control
        statements. Statements that are considered control statements
        are:
        
      - If statements
      - Switch statements
      - For loops
      - While loops
      - Exception handlers
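
        For example, this sketch has a maximum control nesting of 3
        (an if inside two nested for loops):
      
function total = sum_positive(m)
    % cnest = 3: for -> for -> if
    total = 0;
    for i = 1:size(m, 1)
        for j = 1:size(m, 2)
            if m(i, j) > 0
                total = total + m(i, j);
            end
        end
    end
end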
      Number of function parameters ("parameters")
        This counts the number of parameters for each function. Both
        inputs and outputs are considered.
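
        For example, this sketch has three parameters: two inputs
        and one output:
      
function z = add(x, y)
    % parameters = 3: inputs x and y, plus output z
    z = x + y;
end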
      
      Number of direct globals ("globals")
        This counts the number of direct, non-transitive globals for
        each function. In other words, it counts how many distinct
        things are mentioned by all global statements in a function.
      
      
        For example this function has exactly one direct global:
      
function result = f1()
    global x
    result = x;
end
      
        This function also has exactly one direct global. There
        is another global dependency via the call to f1, but it is
        hidden. The metric to measure all globals (direct and hidden)
        will be implemented once we have basic flow analysis working.
      
function result = f2()
    global y
    result = f1() + y;
end
      Number of persistent variables ("persistent")
        This counts the number of persistent variables for each
        function. In other words, it counts how many distinct things
        are mentioned by all persistent statements in a function.
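
        For example, this sketch of a call counter has exactly one
        persistent variable:
      
function r = counter()
    persistent n
    if isempty(n)
        n = 0;
    end
    n = n + 1;
    r = n;
end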
      
      
        Persistent variables make testing extremely difficult in
        MATLAB, more so than globals; so it is a really good idea to
        not have too many of them.
      
    JSON Report Schema
Purpose
	The purpose of the JSON report (unlike the text or HTML
	report) is to be easily machine-parseable. It is not intended
	for human consumption. The report schema is guaranteed to be
	stable between minor releases: i.e. a tool processing a
	report from version a.b.* should also be able to process a
	report from version a.c.* as long as c > b.
      
Schema
Top-level
	The entire report is a single JSON object containing two
	members:
	
      - metrics
      - worst_case
metrics
	The metrics member is a JSON object with one member per
	file. The value of each member is another JSON object with the
	following members:
	
      - file_metrics
      - function_metrics
file_metrics
	A file_metrics member is a JSON object containing members for
	each file metric (e.g. "file_length"). Each value is a
	metrics_result object described below.
      
function_metrics
	A function_metrics member is a JSON object containing members
	for each function, which in turn are JSON objects containing
	members for each function metric (e.g. "cyc"). Each value is a
	metrics_result object described below.
      
metrics_result
	A metrics_result is a JSON object containing the following
	members:
	
      - status - will be one of the following strings:
        - "measured only" - this metric was measured, but no limit
          was enforced
        - "checked: ok" - this metric was measured, and is equal to
          or below the configured limit
        - "checked: justified" - this metric was measured, and was
          above the configured limit; but a justification was
          supplied
        - "checked: fail" - this metric was measured, and was above
          the configured limit
      - measure - an integer indicating the measured value of the
        metric
      - limit - (only for checked metrics) an integer indicating the
        configured limit
      - justification - (only for "checked: justified" metrics) a
        string containing the justification supplied
worst_case
	The worst case table is a JSON object where each member
	describes a metric (function or file). The value of each
	member is a list of augmented metrics_result objects.
      
augmented metrics_result
	As metrics_result (see above), but with one or two additional
	members:
	
      - file - a string indicating the file where the metric was
        measured
      - function - (only for function metrics) a string indicating
        the function where the metric was measured
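
	As a sketch, a report covering a single file (all file names,
	function names, and values here are hypothetical) could look
	like this:

{
  "metrics": {
    "src/f1.m": {
      "file_metrics": {
        "file_length": {"status": "measured only", "measure": 4}
      },
      "function_metrics": {
        "f1": {
          "cyc": {"status": "checked: ok", "measure": 1, "limit": 10},
          "npath": {"status": "checked: fail", "measure": 12, "limit": 5}
        }
      }
    }
  },
  "worst_case": {
    "npath": [
      {"status": "checked: fail", "measure": 12, "limit": 5,
       "file": "src/f1.m", "function": "f1"}
    ]
  }
}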